Expected hostname at index 7 for neo4j bolt (3.5.21) - amazon-ec2

We run Neo4j (3.5.21) on an EC2 instance. Today, after I restarted the server, I noticed this error:
Expected hostname at index 7: bolt://:7687". Starting Neo4j failed: Component 'org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter#75401424' was successfully initialized, but failed to start
Service start logs:
Active database: graph.db
Directories in use:
home: /var/lib/neo4j
config: /etc/neo4j
logs: /var/log/neo4j
plugins: /var/lib/neo4j/plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/run/neo4j
Starting Neo4j.
WARNING: Max 1024 open files allowed, minimum of 40000 recommended. See the Neo4j manual.
Started neo4j (pid 22577). It is available at http://0.0.0.0:7474/
There may be a short delay until the server is ready.
See /var/log/neo4j/neo4j.log for current status.
This is what I see in neo4j.log:
2022-12-03 20:29:49.886+0000 INFO Bolt enabled on 0.0.0.0:7687.
2022-12-03 20:29:51.968+0000 INFO Started.
2022-12-03 20:29:52.121+0000 INFO Stopping...
2022-12-03 20:29:52.231+0000 INFO Stopped.
2022-12-03 20:29:52.233+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter#75401424' was successfully initialized, but failed to start. Please see the attached cause exception "Expected hostname at index 7: bolt://:7687". Starting Neo4j failed: Component 'org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter#75401424' was successfully initialized, but failed to start. Please see the attached cause exception "Expected hostname at index 7: bolt://:7687".
org.neo4j.server.ServerStartupException: Starting Neo4j failed: Component 'org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter#75401424' was successfully initialized, but failed to start. Please see the attached cause exception "Expected hostname at index 7: bolt://:7687".
at org.neo4j.server.exception.ServerStartupErrors.translateToServerStartupError(ServerStartupErrors.java:45)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:187)
at org.neo4j.server.ServerBootstrapper.start(ServerBootstrapper.java:124)
at org.neo4j.server.ServerBootstrapper.start(ServerBootstrapper.java:91)
at org.neo4j.server.CommunityEntryPoint.main(CommunityEntryPoint.java:32)
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter#75401424' was successfully initialized, but failed to start. Please see the attached cause exception "Expected hostname at index 7: bolt://:7687".
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:473)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:180)
... 3 more
Caused by: org.neo4j.graphdb.config.InvalidSettingException: Unable to construct bolt discoverable URI using '' as hostname: Expected hostname at index 7: bolt://:7687
at org.neo4j.server.rest.discovery.DiscoverableURIs$Builder.add(DiscoverableURIs.java:133)
at org.neo4j.server.rest.discovery.DiscoverableURIs$Builder.lambda$addBoltConnectorFromConfig$1(DiscoverableURIs.java:155)
at java.util.Optional.ifPresent(Optional.java:159)
at org.neo4j.server.rest.discovery.DiscoverableURIs$Builder.addBoltConnectorFromConfig(DiscoverableURIs.java:145)
at org.neo4j.server.rest.discovery.CommunityDiscoverableURIs.communityDiscoverableURIs(CommunityDiscoverableURIs.java:38)
at org.neo4j.server.CommunityNeoServer.lambda$createDBMSModule$0(CommunityNeoServer.java:99)
at org.neo4j.server.modules.DBMSModule.start(DBMSModule.java:59)
at org.neo4j.server.AbstractNeoServer.startModules(AbstractNeoServer.java:249)
at org.neo4j.server.AbstractNeoServer.access$700(AbstractNeoServer.java:102)
at org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter.start(AbstractNeoServer.java:541)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452)
... 5 more
Caused by: java.net.URISyntaxException: Expected hostname at index 7: bolt://:7687
at java.net.URI$Parser.fail(URI.java:2847)
at java.net.URI$Parser.failExpecting(URI.java:2853)
at java.net.URI$Parser.parseHostname(URI.java:3389)
at java.net.URI$Parser.parseServer(URI.java:3235)
at java.net.URI$Parser.parseAuthority(URI.java:3154)
at java.net.URI$Parser.parseHierarchical(URI.java:3096)
at java.net.URI$Parser.parse(URI.java:3052)
at java.net.URI.<init>(URI.java:673)
at org.neo4j.server.rest.discovery.DiscoverableURIs$Builder.add(DiscoverableURIs.java:128)
... 15 more
2022-12-03 20:29:52.243+0000 INFO Neo4j Server shutdown initiated by request
EC2: t3.large
OS: Ubuntu 18.04.5 LTS (GNU/Linux 5.4.0-1092-aws x86_64)
I have already tried restarting the server and restarting the service multiple times, without success. We have not changed anything on the networking side (VPC, subnet, security groups, network interface, etc.).
I'm curious whether there's a config setting I am missing. Any help will be much appreciated.
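For reference, the bolt:// URI in that error is built from the bolt connector settings in neo4j.conf; in 3.5.x the relevant keys look roughly like this (a sketch with placeholder values, not my actual file):
dbms.connector.bolt.enabled=true
dbms.connector.bolt.listen_address=0.0.0.0:7687
# hostname used when building the advertised bolt:// discovery URI
dbms.connectors.default_advertised_address=your.hostname.or.ip
dbms.connector.bolt.advertised_address=:7687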

Related

Payara Server 5 is not starting from the console on macOS/Windows

On macOS, I was trying to use Payara Server with NetBeans 12 and I got:
Launching Payara Server on Felix platform
INFO: Create bundle provisioner class = class com.sun.enterprise.glassfish.bootstrap.osgi.BundleProvisioner.
Registered com.sun.enterprise.glassfish.bootstrap.osgi.EmbeddedOSGiGlassFishRuntime#462c1ddf in service registry.
#!## LogManagerService.postConstruct : rootFolder=/Users/joseluisbz/Documentos/Java/payara5-2020-4/glassfish
#!## LogManagerService.postConstruct : templateDir=/Users/joseluisbz/Documentos/Java/payara5-2020-4/glassfish/lib/templates
#!## LogManagerService.postConstruct : src=/Users/joseluisbz/Documentos/Java/payara5-2020-4/glassfish/lib/templates/logging.properties
#!## LogManagerService.postConstruct : dest=/Users/joseluisbz/Documentos/Java/payara5-2020-4/glassfish/domains/domain1/config/logging.properties
Running Payara Version: Payara Server 5.2020.4 #badassfish (build 817)|#]
Server log file is using Formatter class: com.sun.enterprise.server.logging.ODLLogFormatter|#]
HV000001: Hibernate Validator 6.1.2.Final|#]
[192.168.0.11]:4900 [development] [3.12.6] Connection[id=1, /192.168.0.11:49587->/192.168.0.11:5900, qualifier=null, endpoint=[192.168.0.11]:5900, alive=false, type=NONE] closed. Reason: Exception in Connection[id=1, /192.168.0.11:49587->/192.168.0.11:5900, qualifier=null, endpoint=[192.168.0.11]:5900, alive=true, type=NONE], thread=hz._hzInstance_1_development.IO.thread-in-0
java.lang.IllegalStateException: Unknown protocol: RFB
at com.hazelcast.nio.tcp.UnifiedProtocolDecoder.onRead(UnifiedProtocolDecoder.java:107)
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:135)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:369)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:354)
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:280)
at com.hazelcast.internal.networking.nio.NioThread.run(NioThread.java:235)
|#]
Then, from the console, here is my current directory (I renamed the payara5 directory to payara5-2020-4):
% pwd
.../payara5-2020-4/glassfish/bin
%
In order to fix the first problem:
% ./asadmin set-hazelcast-configuration --enabled=false
Remote server does not listen for requests on [localhost:4848]. Is the server up?
No such local command: set-hazelcast-configuration. Unable to access the server to execute the command remotely. Verify the server is available.
Command set-hazelcast-configuration failed.
%
After that, I tried to start the domain...
% ./asadmin start-domain domain1
Waiting for domain1 to start ..................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
No response from the Domain Administration Server (domain1) after 600 seconds.
The command is either taking too long to complete or the server has failed.
Please see the server log files for command status.
Please start with the --verbose option in order to see early messages.
Command start-domain failed.
%
Then I tried the verbose option (as recommended)...
% ./asadmin start-domain domain1 --verbose
Command start-domain only accepts one operand
...
% ./asadmin --verbose start-domain domain1
Invalid option: --verbose
...
% ./asadmin -v start-domain domain1
Invalid option: -v
...
% ./asadmin start-domain domain1 -v
Command start-domain only accepts one operand
...
The common usage message:
Usage: asadmin [asadmin-utility-options] start-domain
[-v|--verbose[=<verbose(default:false)>]]
[--upgrade[=<upgrade(default:false)>]]
[-w|--watchdog[=<watchdog(default:false)>]]
[-d|--debug[=<debug(default:false)>]]
[-n|--dry-run[=<dry-run(default:false)>]]
[--drop-interrupted-commands[=<drop-interrupted-commands(default:false)>]]
[--prebootcommandfile <prebootcommandfile>]
[--postbootcommandfile <postbootcommandfile>] [--domaindir <domaindir>]
[-?|--help[=<help(default:false)>]] [domain_name]
Sadly, I strongly believe Payara is an immature product.
But how can I solve all these errors/mistakes?
EDIT:
I was testing on Windows 10 Pro with NetBeans 12:
Launching Payara Server on Felix platform
INFO: Create bundle provisioner class = class com.sun.enterprise.glassfish.bootstrap.osgi.BundleProvisioner.
Registered com.sun.enterprise.glassfish.bootstrap.osgi.EmbeddedOSGiGlassFishRuntime#586f5c68 in service registry.
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.sun.enterprise.glassfish.bootstrap.GlassFishMain.main(GlassFishMain.java:109)
at com.sun.enterprise.glassfish.bootstrap.ASMain.main(ASMain.java:54)
Caused by: A MultiException has 2 exceptions. They are:
1. com.sun.enterprise.module.ResolveError: Failed to start OSGiModuleImpl:: Bundle = [fish.payara.server.internal.batch.glassfish-batch-connector [102]], State = [NEW]
2. java.lang.IllegalStateException: Could not load descriptor SystemDescriptor(
implementation=org.glassfish.batch.spi.impl.BatchRuntimeConfigurationInjector
name=batch-runtime-configuration
contracts={org.glassfish.batch.spi.impl.BatchRuntimeConfigurationInjector,org.jvnet.hk2.config.ConfigInjector}
scope=javax.inject.Singleton
qualifiers={org.jvnet.hk2.config.InjectionTarget}
descriptorType=CLASS
descriptorVisibility=NORMAL
metadata=#table-suffix={optional,default\:,datatype\:java.lang.String,leaf},#data-source-lookup-name={optional,datatype\:java.lang.String,leaf},#table-prefix={optional,default\:,datatype\:java.lang.String,leaf},#schema-name={optional,default\:APP,datatype\:java.lang.String,leaf},#executor-service-lookup-name={optional,default\:concurrent/__defaultManagedExecutorService,datatype\:java.lang.String,leaf},target={org.glassfish.batch.spi.impl.BatchRuntimeConfiguration},Bundle-SymbolicName={fish.payara.server.internal.batch.glassfish-batch-connector},Bundle-Version={5.2020.4}
rank=0
loader=OsgiPopulatorPostProcessor.HK2Loader(OSGiModuleImpl:: Bundle = [fish.payara.server.internal.batch.glassfish-batch-connector [102]], State = [NEW],1228963996)
proxiable=null
proxyForSameScope=null
analysisName=null
id=170
locatorId=0
identityHashCode=373437697
reified=false)
at org.jvnet.hk2.internal.ServiceLocatorImpl.loadClass(ServiceLocatorImpl.java:2247)
at org.jvnet.hk2.internal.ServiceLocatorImpl.reifyDescriptor(ServiceLocatorImpl.java:438)
at org.jvnet.hk2.internal.ServiceLocatorImpl.reifyDescriptor(ServiceLocatorImpl.java:457)
at org.jvnet.hk2.config.DomDocument$InjectionTargetFilter.matches(DomDocument.java:184)
at org.jvnet.hk2.internal.ServiceLocatorImpl.getDescriptors(ServiceLocatorImpl.java:347)
at org.jvnet.hk2.internal.ServiceLocatorImpl.getDescriptors(ServiceLocatorImpl.java:389)
at org.jvnet.hk2.internal.ServiceLocatorImpl.getBestDescriptor(ServiceLocatorImpl.java:397)
at org.jvnet.hk2.config.DomDocument.buildModel(DomDocument.java:135)
at org.jvnet.hk2.config.ConfigModel.parseValue(ConfigModel.java:959)
at org.jvnet.hk2.config.ConfigModel.<init>(ConfigModel.java:875)
at org.jvnet.hk2.config.DomDocument.buildModel(DomDocument.java:114)
at org.jvnet.hk2.config.DomDocument.getModelByElementName(DomDocument.java:162)
at org.jvnet.hk2.config.ConfigParser.handleElement(ConfigParser.java:165)
at org.jvnet.hk2.config.ConfigParser.parse(ConfigParser.java:101)
at org.jvnet.hk2.config.ConfigParser.parse(ConfigParser.java:95)
at org.glassfish.config.support.DomainXml.parseDomainXml(DomainXml.java:271)
at org.glassfish.config.support.DomainXml.run(DomainXml.java:121)
at org.jvnet.hk2.config.ConfigurationPopulator.populateConfig(ConfigurationPopulator.java:58)
at org.glassfish.hk2.bootstrap.HK2Populator.populateConfig(HK2Populator.java:83)
at com.sun.enterprise.module.common_impl.AbstractModulesRegistryImpl.populateConfig(AbstractModulesRegistryImpl.java:190)
at com.sun.enterprise.module.bootstrap.Main.createServiceLocator(Main.java:249)
at org.jvnet.hk2.osgiadapter.HK2Main.createServiceLocator(HK2Main.java:95)
at com.sun.enterprise.glassfish.bootstrap.osgi.EmbeddedOSGiGlassFishRuntime.newGlassFish(EmbeddedOSGiGlassFishRuntime.java:95)
at com.sun.enterprise.glassfish.bootstrap.GlassFishRuntimeDecorator.newGlassFish(GlassFishRuntimeDecorator.java:68)
at com.sun.enterprise.glassfish.bootstrap.osgi.OSGiGlassFishRuntime.newGlassFish(OSGiGlassFishRuntime.java:91)
at com.sun.enterprise.glassfish.bootstrap.GlassFishMain$Launcher.launch(GlassFishMain.java:125)
... 6 more
Caused by: com.sun.enterprise.module.ResolveError: Failed to start OSGiModuleImpl:: Bundle = [fish.payara.server.internal.batch.glassfish-batch-connector [102]], State = [NEW]
at org.jvnet.hk2.osgiadapter.OSGiModuleImpl.start(OSGiModuleImpl.java:193)
at org.jvnet.hk2.osgiadapter.OsgiPopulatorPostProcessor$1.loadClass(OsgiPopulatorPostProcessor.java:54)
at org.jvnet.hk2.internal.ServiceLocatorImpl.loadClass(ServiceLocatorImpl.java:2239)
... 31 more
Caused by: org.osgi.framework.BundleException: Unable to resolve fish.payara.server.internal.batch.glassfish-batch-connector [102](R 102.0): missing requirement [fish.payara.server.internal.batch.glassfish-batch-connector [102](R 102.0)] osgi.wiring.package; (osgi.wiring.package=com.ibm.jbatch.spi) [caused by: Unable to resolve fish.payara.server.internal.batch.payara-jbatch [311](R 311.0): missing requirement [fish.payara.server.internal.batch.payara-jbatch [311](R 311.0)] osgi.wiring.package; (osgi.wiring.package=org.glassfish.weld) [caused by: Unable to resolve fish.payara.server.internal.web.weld-integration [372](R 372.0): missing requirement [fish.payara.server.internal.web.weld-integration [372](R 372.0)] osgi.wiring.package; (&(osgi.wiring.package=org.glassfish.web.deployment.descriptor)(version>=5.2020.0)(!(version>=6.0.0))) [caused by: Unable to resolve fish.payara.server.internal.web.glue [360](R 360.0): missing requirement [fish.payara.server.internal.web.glue [360](R 360.0)] osgi.wiring.package; (&(osgi.wiring.package=org.apache.catalina)(version>=5.2020.0)(!(version>=6.0.0))) [caused by: Unable to resolve fish.payara.server.internal.web.core [358](R 358.0): missing requirement [fish.payara.server.internal.web.core [358](R 358.0)] osgi.wiring.package; (&(osgi.wiring.package=org.glassfish.web.loader)(version>=5.2020.0)(!(version>=6.0.0)))]]]] Unresolved requirements: [[fish.payara.server.internal.batch.glassfish-batch-connector [102](R 102.0)] osgi.wiring.package; (osgi.wiring.package=com.ibm.jbatch.spi)]
at org.apache.felix.framework.Felix.resolveBundleRevision(Felix.java:4368)
at org.apache.felix.framework.Felix.startBundle(Felix.java:2281)
at org.apache.felix.framework.BundleImpl.start(BundleImpl.java:998)
at org.jvnet.hk2.osgiadapter.OSGiModuleImpl.startBundle(OSGiModuleImpl.java:227)
at org.jvnet.hk2.osgiadapter.OSGiModuleImpl.start(OSGiModuleImpl.java:185)
... 33 more
Completed shutdown of GlassFish runtime
We are in non-embedded mode, so fish.payara.server.internal.core.glassfish [113] has nothing to do.
The error "Unknown protocol: RFB" is coming from the Hazelcast component, which is trying to discover other cluster instances that could be running on port 5900. In some operating systems, very often on Mac, this port is occupied by VNC (remote desktop), which responds to Payara Server in an unexpected way.
There's a solution covered for Payara Enterprise users in the Payara Knowledge Base. I have access to it and will copy the relevant parts from it here.
There are several possible solutions:
Stop the process that occupies port 5900 (e.g. VNC, which uses that port by default), or alternatively change its port. Payara Server should then start OK.
Configure Payara Server to use a different port for Hazelcast. If you get Payara Server running using the above solution, you can then run the command: asadmin set-hazelcast-configuration --startport=5901.
Directly edit the domain.xml in the directory glassfish/domains/domain1/config and change the port 5900 to something else. Then run Payara Server as usual.
Or change the Hazelcast port of Payara Server at startup. First, create a text file config.txt with the single line set-hazelcast-configuration --startport=5901. Then start Payara Server with asadmin start-domain --postbootcommandfile config.txt. More on this in the documentation: https://docs.payara.fish/community/docs/5.2020.4/documentation/payara-micro/asadmin/pre-and-post-boot-scripts.html
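Putting that last option together, a minimal sketch (run from glassfish/bin, assuming a default domain1):
% echo 'set-hazelcast-configuration --startport=5901' > config.txt
% ./asadmin start-domain --postbootcommandfile config.txt domain1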

Impala Daemon Not Starting in cloudera-quickstart-vm-5.10

Impala is not working in cloudera-quickstart-vm-5.10 because the Impala Daemon is not starting.
Following is the status of the three required roles for Impala:
Impala Daemon is down
Impala Catalog Server has bad health &
Impala StateStore has good health
I am getting the following error messages:
Jul 22, 9:57:33.262 AM ERROR impala-server.cc:252
Could not read the root directory at hdfs://quickstart.cloudera:8020. Error was: Call From quickstart.cloudera/127.0.0.1 to quickstart.cloudera:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Jul 22, 9:57:33.262 AM ERROR impala-server.cc:255
Aborting Impala Server startup due to improper configuration. Impalad exiting.
I tried restarting the service through the command line:
[cloudera@quickstart ~]$ sudo service impala-server restart
Stopped Impala Server: [ OK ]
Started Impala Server (impalad): [ OK ]
[cloudera@quickstart ~]$ service impala-server status
Impala Server is dead and pid file exists [FAILED]
It is still not working.
I am not sure whether some configuration change is needed; if so, I could not find what exactly should be changed to make it work.
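The error Impala reports is a connection refused to quickstart.cloudera:8020, which is the HDFS NameNode RPC port, so it may be worth confirming the NameNode itself is up before changing any Impala settings. A rough check on the quickstart VM (assuming the stock CDH service names) would be:
[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode status
[cloudera@quickstart ~]$ sudo netstat -tlnp | grep 8020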

Connection refused error in worker logs - apache storm

I see the error below in the worker logs. It happens almost every millisecond, but the cluster is running fine. I would like to know what these errors mean and why they occur.
This happens on all the worker nodes.
2016-05-12T15:32:53.514-0500 b.s.m.n.Client [ERROR] connection attempt 3 to Netty-Client-xxxxx.hq.abc.com/xx.xx.xxx.xx:6700 failed: java.net.ConnectException: Connection refused: xxxxx.hq.abc.com/xx.xx.xxx.xxx:6700
And after some time I see this:
2016-05-12T15:44:25.940-0500 b.s.m.n.Client [ERROR] discarding 1 messages because the Netty client to Netty-Client-xxxxx.hq.abc.com/xx.xx.xxx.xxx:6700 is being closed
After struggling with this problem for a long time, I found that setting the storm.local.hostname property in the storm.yaml file solved the issue for me. On my laptop, I set storm.zookeeper.servers, nimbus.host and storm.local.hostname all to "localhost".
I am using Storm version 0.10.2.
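For reference, the relevant part of my storm.yaml ended up looking roughly like this:
storm.zookeeper.servers:
  - "localhost"
nimbus.host: "localhost"
storm.local.hostname: "localhost"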

Remotely connect to spark on yarn cluster in client mode

I have a remote Spark-on-YARN cluster. If I use RStudio Server (the web version) hosted on that cluster to connect in client mode, I can do the following:
sc <- SparkR::sparkR.init(master = "yarn-client")
However, if I try to use RStudio on my local machine to connect to that Spark cluster the same way, I get errors:
ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master
...
ERROR Utils: Uncaught exception in thread nioEventLoopGroup-2-2
java.lang.NullPointerException
...
ERROR RBackendHandler: createSparkContext on org.apache.spark.api.r.RRDD failed
Error in invokeJava(isStatic = TRUE, className, methodName, ...) :
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
A more detailed error message on the Hadoop application tracking page looks like this:
User: blueivy
Name: SparkR
Application Type: SPARK
Application Tags:
State: FAILED
FinalStatus: FAILED
Started: 27-Oct-2015 11:07:09
Elapsed: 4mins, 39sec
Tracking URL: History
Diagnostics:
Application application_1445628650748_0027 failed 2 times due to AM Container for appattempt_1445628650748_0027_000002 exited with exitCode: 10
For more detailed output, check application tracking page:http://master:8088/proxy/application_1445628650748_0027/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1445628650748_0027_02_000001
Exit code: 10
Stack trace: ExitCodeException exitCode=10:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:267)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:618)
at java.lang.Thread.run(Thread.java:785)
Container exited with a non-zero exit code 10
Failing this attempt. Failing the application.
I have the same Hadoop and Spark configurations and environment as the remote cluster: Spark 1.5.1, Hadoop 2.6.0, and Ubuntu 14.04. Can anyone help me find my mistake here?
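(By "the same configurations and environment" I mean the local machine has the cluster's Hadoop client configs available and pointed to by the usual environment variables before launching RStudio, roughly like this, with placeholder paths:)
export HADOOP_CONF_DIR=/path/to/hadoop-2.6.0/etc/hadoop
export YARN_CONF_DIR=/path/to/hadoop-2.6.0/etc/hadoop
export SPARK_HOME=/path/to/spark-1.5.1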

Starting Neo4j Server failed: org/apache/juli/logging/LogFactory

After installing Neo4j Community on Windows 8 and starting it up, I get the following error:
Starting Neo4j Server failed: org/apache/juli/logging/LogFactory
What could be causing this error?
