Storm worker not starting - apache-storm

I am trying to run a Storm topology, but the Storm worker refuses to start. When I run the java command that invokes the worker process, I get the following error:
Exception: java.lang.StackOverflowError thrown from the UncaughtExceptionHandler in thread "main"
I am not able to find what is causing this. Has anyone faced a similar issue?
Edit:
When I run the worker process with the -V flag, I get the following error:
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.library.path=/usr/local/lib:/opt/local/lib:/usr/lib
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.io.tmpdir=/tmp
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.compiler=<NA>
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.name=Linux
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.arch=amd64
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.version=3.5.0-23-generic
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.name=storm
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.home=/home/storm
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.dir=/home/storm/storm-0.9.0.1
797 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[main,5,main] died
PS: When I run the same topology on a local cluster it works fine; it only fails to start when I deploy it in cluster mode.

Just found out the issue. The jar I created to upload to the Storm cluster was kept in the Storm base directory. This somehow created a conflict that was not shown in the log file; in fact, the log file never got created.
Make sure no external jars are present in the base Storm folder from which you start Storm. It's a really tricky error, and there is no hint of the cause until you stumble on the workaround.
I hope the Storm developers add this to the logs so that users facing this issue can pinpoint exactly what is happening.
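A quick check, as a rough sketch (the install directory is taken from the log above; the topology jar path and main class are just placeholders):

cd /home/storm/storm-0.9.0.1
ls *.jar                 # should list nothing; Storm's own jars belong under lib/
# keep topology jars outside the Storm install and submit them from there, e.g.:
bin/storm jar /path/to/mytopology.jar com.example.MyTopology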

Related

Apache Nifi not starting on Windows 7 - because "extensionMapping" is null

I am a beginner trying to install NiFi for the first time, and I am having trouble getting it working.
I have Windows 7 32-bit. I installed LibericaJDK-19 just to get NiFi working, since it was not working with Java 8 for me either.
I am trying nifi-1.19.1 with the default conf files. I have set JAVA_HOME to point to the new JDK 19.
Last lines of nifi-bootstrap.log:
2023-01-18 21:51:13,373 INFO [NiFi Bootstrap Command Listener] org.apache.nifi.bootstrap.RunNiFi Apache NiFi now running and listening for Bootstrap requests on port 51200
2023-01-18 21:53:44,866 ERROR [NiFi logging handler] org.apache.nifi.StdErr Failed to start web server: Cannot invoke "org.apache.nifi.nar.ExtensionMapping.size()" because "extensionMapping" is null
2023-01-18 21:53:44,866 ERROR [NiFi logging handler] org.apache.nifi.StdErr Shutting down...
2023-01-18 21:53:45,515 INFO [main] org.apache.nifi.bootstrap.RunNiFi NiFi never started. Will not restart NiFi
At the start of nifi-app.log I get this warning:
WARN [main] org.apache.nifi.nar.NarUnpacker Unable to load NAR library bundles due to java.util.zip.ZipException: zip END header not found Will proceed without loading any further Nar bundles
java.util.zip.ZipException: zip END header not found
at java.base/java.util.zip.ZipFile$Source.findEND(ZipFile.java:1483)
at java.base/java.util.zip.ZipFile$Source.initCEN(ZipFile.java:1491)
at java.base/java.util.zip.ZipFile$Source.<init>(ZipFile.java:1329)
at java.base/java.util.zip.ZipFile$Source.get(ZipFile.java:1292)
at java.base/java.util.zip.ZipFile$CleanableResource.<init>(ZipFile.java:710)
at java.base/java.util.zip.ZipFile.<init>(ZipFile.java:243)
at java.base/java.util.zip.ZipFile.<init>(ZipFile.java:172)
at java.base/java.util.jar.JarFile.<init>(JarFile.java:345)
at java.base/java.util.jar.JarFile.<init>(JarFile.java:316)
at java.base/java.util.jar.JarFile.<init>(JarFile.java:282)
at org.apache.nifi.nar.NarUnpacker.determineDocumentedNiFiComponents(NarUnpacker.java:605)
at org.apache.nifi.nar.NarUnpacker.unpackDocumentation(NarUnpacker.java:550)
at org.apache.nifi.nar.NarUnpacker.unpackBundleDocs(NarUnpacker.java:287)
at org.apache.nifi.nar.NarUnpacker.mapExtensions(NarUnpacker.java:271)
at org.apache.nifi.nar.NarUnpacker.unpackNars(NarUnpacker.java:220)
at org.apache.nifi.nar.NarUnpacker.unpackNars(NarUnpacker.java:89)
at org.apache.nifi.nar.NarUnpacker.unpackNars(NarUnpacker.java:83)
at org.apache.nifi.nar.NarUnpacker.unpackNars(NarUnpacker.java:74)
at org.apache.nifi.NiFi.<init>(NiFi.java:142)
at org.apache.nifi.NiFi.<init>(NiFi.java:83)
at org.apache.nifi.NiFi.main(NiFi.java:332)
and at the end of nifi-app.log
2023-01-18 21:53:44,866 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
java.lang.NullPointerException: Cannot invoke "org.apache.nifi.nar.ExtensionMapping.size()" because "extensionMapping" is null
at org.apache.nifi.documentation.DocGenerator.generate(DocGenerator.java:61)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:788)
at org.apache.nifi.NiFi.<init>(NiFi.java:172)
at org.apache.nifi.NiFi.<init>(NiFi.java:83)
at org.apache.nifi.NiFi.main(NiFi.java:332)
2023-01-18 21:53:44,889 INFO [Thread-0] org.apache.nifi.NiFi Application Server shutdown started
2023-01-18 21:53:44,890 INFO [Thread-0] org.apache.nifi.NiFi Application Server shutdown completed

I can't start Apache Nifi

When I run run-nifi.bat, a window pops up for a split second and then closes automatically. I don't really understand this; I just need it for a university class and it hasn't been properly explained, so I'm trying it on my own.
I get this in my nifi-app.log:
2021-05-29 17:07:30,179 INFO [main] org.apache.nifi.NiFi Launching NiFi...
2021-05-29 17:07:30,450 INFO [main] org.apache.nifi.security.kms.CryptoUtils Determined default nifi.properties path to be 'D:\SYSTEM\Downloads\nifi-1.13.2-bin\nifi-1.13.2\.\conf\nifi.properties'
2021-05-29 17:07:30,454 INFO [main] o.a.nifi.properties.NiFiPropertiesLoader Loaded 188 properties from D:\SYSTEM\Downloads\nifi-1.13.2-bin\nifi-1.13.2\.\conf\nifi.properties
2021-05-29 17:07:30,465 INFO [main] org.apache.nifi.NiFi Loaded 188 properties
2021-05-29 17:07:30,705 INFO [main] org.apache.nifi.BootstrapListener Started Bootstrap Listener, Listening for incoming requests on port 63487
2021-05-29 17:07:30,711 ERROR [main] org.apache.nifi.NiFi Failure to launch NiFi due to java.net.ConnectException: Connection refused: connect
java.net.ConnectException: Connection refused: connect
at java.base/sun.nio.ch.Net.connect0(Native Method)
at java.base/sun.nio.ch.Net.connect(Net.java:576)
at java.base/sun.nio.ch.Net.connect(Net.java:565)
at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:588)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:333)
at java.base/java.net.Socket.connect(Socket.java:645)
at java.base/java.net.Socket.connect(Socket.java:595)
at org.apache.nifi.BootstrapListener.sendCommand(BootstrapListener.java:102)
at org.apache.nifi.BootstrapListener.start(BootstrapListener.java:74)
at org.apache.nifi.NiFi.<init>(NiFi.java:102)
at org.apache.nifi.NiFi.<init>(NiFi.java:71)
at org.apache.nifi.NiFi.main(NiFi.java:303)
2021-05-29 17:07:30,712 INFO [Thread-0] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...
2021-05-29 17:07:30,712 INFO [Thread-0] org.apache.nifi.NiFi Jetty web server shutdown completed (nicely or otherwise).
I've tried editing the web properties in the config files in case the defaults were wrong. Right now they are set as follows, but the errors are the same:
nifi.web.http.host=localhost
nifi.web.http.port=9090
nifi.web.http.network.interface.default=
I have Windows 10 Home Edition.
NiFi requires Java 8 or Java 11 to run, so your environment variables should point to a Java 8 or Java 11 installation.
Have you tried setting the JAVA_HOME environment variable? I would also recommend checking the config files and pointing them to the Java installation.
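For example, a minimal sketch on Windows (cmd); the JDK path is just an assumption and should be replaced with your actual Java 8 or Java 11 install directory:

set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_291
set PATH=%JAVA_HOME%\bin;%PATH%
java -version
run-nifi.bat

If java -version reports 8 or 11, NiFi should at least get past the bootstrap stage.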
You might be missing a URL ACL. Maybe you can try the command below:
netsh http add urlacl url=http://computername:port/ user=username
Source: https://serverfault.com/a/246798/191420
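With the properties shown in the question (nifi.web.http.host=localhost, nifi.web.http.port=9090) that would look something like the following, run from an elevated command prompt (the user value is just a placeholder):

netsh http add urlacl url=http://localhost:9090/ user=%USERNAME%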

Launch Neo4j 2.3.2 from the commandline

I have installed Neo4j 2.3.2 Community Edition on Mac OS 10.10. I can launch the application and connect to it from localhost:7474/browser/. So far, so good.
I would like to launch Neo4j 2.3.2 from a Terminal window, so that I don't have the overhead of a windowed application running at the same time. When I run the following command...
$ ~/neo4j/bin/neo4j console
... I get this output in the Terminal window:
WARNING: Max 256 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
Starting Neo4j Server console-mode...
Unable to find any JVMs matching version "1.7".
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=conf/neo4j-server.properties -Djava.util.logging.config.file=conf/logging.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:-OmitStackTraceInFastThrow -XX:hashCode=5 -Dneo4j.ext.udc.source=tarball
2016-02-25 14:03:18.755+0000 INFO [API] Setting startup timeout to: 120000ms based on 120000
2016-02-25 14:03:58.356+0000 INFO [API] Successfully started database
2016-02-25 14:04:04.220+0000 INFO [API] Starting HTTP on port :7474 with 2 threads available
2016-02-25 14:04:13.512+0000 INFO [API] Enabling HTTPS on port :7473
09:04:20.201 [main] INFO org.eclipse.jetty.util.log - Logging initialized #98517ms
2016-02-25 14:04:23.034+0000 INFO [API] Mounting static content at [/webadmin] from [webadmin-html]
2016-02-25 14:04:25.785+0000 INFO [API] Mounting static content at [/browser] from [browser]
09:04:25.993 [main] INFO org.eclipse.jetty.server.Server - jetty-9.2.4.v20141103
09:04:26.722 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.h.MovedContextHandler#1611ba2{/,null,AVAILABLE}
09:04:27.794 [main] INFO o.e.j.w.StandardDescriptorProcessor - NO JSP Support for /webadmin, did not find org.apache.jasper.servlet.JspServlet
09:04:27.981 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext#132ea25{/webadmin,jar:file:/Users/james/neo4j/system/lib/neo4j-server-2.2.5-static-web.jar!/webadmin-html,AVAILABLE}
09:04:38.841 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler#60bfaa02{/db/manage,null,AVAILABLE}
09:04:39.326 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler#28e2e149{/db/data,null,AVAILABLE}
09:04:39.353 [main] INFO o.e.j.w.StandardDescriptorProcessor - NO JSP Support for /browser, did not find org.apache.jasper.servlet.JspServlet
09:04:39.355 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext#78e6aa71{/browser,jar:file:/Users/james/neo4j/system/lib/neo4j-browser-2.2.5.jar!/browser,AVAILABLE}
09:04:39.536 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler#4994d9ab{/,null,AVAILABLE}
09:04:39.745 [main] INFO o.e.jetty.server.ServerConnector - Started ServerConnector#2d19cf20{HTTP/1.1}{localhost:7474}
09:04:40.576 [main] INFO o.e.jetty.server.ServerConnector - Started ServerConnector#43c742c{SSL-HTTP/1.1}{localhost:7473}
09:04:40.577 [main] INFO org.eclipse.jetty.server.Server - Started #119058ms
2016-02-25 14:04:40.577+0000 INFO [API] Server started on: http://localhost:7474/
2016-02-25 14:04:40.590+0000 INFO [API] Remote interface ready and available at [http://localhost:7474/]
I have Java version 8, update 74 installed (build 1.8.0_74-b02), so I assume that I can ignore the warning Unable to find any JVMs matching version "1.7".
However, when I visit http://localhost:7474/ in Chrome Version 45.0.2454.85 (64-bit), I see three errors in the Developer Console: two files that fail to load and a subsequent script error.
localhost/:28 GET http://localhost:7474/browser/styles/68eddd94.main.css
localhost/:466 GET http://localhost:7474/browser/scripts/ded362b3.scripts.js
Uncaught Error: [$injector:modulerr] Failed to instantiate module neo4jApp due to:
Error: [$injector:nomod] Module 'neo4jApp' is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you specify the dependencies as the second argument.
As a result, the Neo4j interface does not appear in the browser window.
Is it possible to run Neo4j 2.3.2 from the Terminal, and if so, what do I need to do to get http://localhost:7474/ to load correctly?
This looks like a JS file mismatch caused by aggressive browser caching. Shift-reload, or test in an incognito window.
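If a hard reload doesn't fix it, one quick way to rule out the server side (the URLs are taken from the errors above; curl is assumed to be available) is to request the failing assets directly and check the status codes:

curl -I http://localhost:7474/browser/styles/68eddd94.main.css
curl -I http://localhost:7474/browser/scripts/ded362b3.scripts.js

A 200 response points at browser caching; a 404 would mean the files really are missing from the server.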

While running a topology in storm we are getting error like this

While running a topology in Storm we are getting an error like this:
8983 [Thread-6] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
9144 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
9199 [Thread-6-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
9241 [main] INFO backtype.storm.daemon.nimbus - Shut down master
9273 [Thread-6] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
9306 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN org.apache.zookeeper.server.NIOServerCnxn - EndOfStreamException: Unable to read additional data from client sessionid 0x143af55728d0003, likely client has closed socket
9354 [main] INFO backtype.storm.daemon.supervisor - Shutting down c094c3b1-a378-4c4f-af35-9278647c217a:4beddc09-4675-4fb9-8bdc-9cf5013ce9ca
9358 [main] INFO backtype.storm.daemon.supervisor - Shut down c094c3b1-a378-4c4f-af35-9278647c217a:4beddc09-4675-4fb9-8bdc-9cf5013ce9ca
9361 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor c094c3b1-a378-4c4f-af35-9278647c217a
9364 [Thread-5] INFO backtype.storm.event - Event manager interrupted
9369 [Thread-6] INFO backtype.storm.event - Event manager interrupted
9425 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor 386d8d71-c9b5-4b51-bd6e-f9f605034ea0
9428 [Thread-8] INFO backtype.storm.event - Event manager interrupted
9429 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN org.apache.zookeeper.server.NIOServerCnxn - EndOfStreamException: Unable to read additional data from client sessionid 0x143af55728d0007, likely client has closed socket
9429 [Thread-9] INFO backtype.storm.event - Event manager interrupted
9473 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN org.apache.zookeeper.server.NIOServerCnxn - EndOfStreamException: Unable to read additional data from client sessionid 0x143af55728d0009, likely client has closed socket
9476 [main] INFO backtype.storm.testing - Shutting down in process zookeeper
9503 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN org.apache.zookeeper.server.NIOServerCnxn - Ignoring exception
java.nio.channels.ClosedChannelException: null
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:211) ~[na:1.7.0_03]
    at org.apache.zookeeper.server.NIOServerCnxn$Factory.run(NIOServerCnxn.java:242) ~[zookeeper-3.3.3.jar:3.3.3-1073969]
9510 [main] INFO backtype.storm.testing - Done shutting down in process zookeeper
9513 [main] INFO backtype.storm.testing - Deleting temporary path C:\Users\sowmiya\AppData\Local\Temp\c9b1bc1a-a950-4098-af77-f81a4d2b112f
9520 [main] INFO backtype.storm.testing - Deleting temporary path C:\Users\sowmiya\AppData\Local\Temp\7e75c468-18ea-4787-a4ac-496fb108db71
9527 [main] INFO backtype.storm.testing - Unable to delete file: C:\Users\sowmiya\AppData\Local\Temp\7e75c468-18ea-4787-a4ac-496fb108db71\version-2\log.1
9529 [main] INFO backtype.storm.testing - Deleting temporary path C:\Users\sowmiya\AppData\Local\Temp\fa7b3c9b-ac93-4090-b9e2-63f10019e61f
9543 [main] INFO backtype.storm.testing - Deleting temporary path C:\Users\sowmiya\AppData\Local\Temp\55f1fd11-508e-43bb-b340-0d9b79f3af33
9579 [Thread-6-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
9580 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
9583 [Thread-6-EventThread] WARN backtype.storm.cluster - Received event :disconnected::none: with disconnected Zookeeper.
11232 [Thread-6-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x143af55728d000b for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused: no further information
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_03]
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701) ~[na:1.7.0_03]
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) ~[zookeeper-3.3.3.jar:3.3.3-1073969]
13992 [Thread-6-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x143af55728d000b for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused: no further information
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_03]
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701) ~[na:1.7.0_03]
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
When we try to run the topology jar file, all the processes (nimbus, zookeeper and supervisor) die. Please help us understand why this happens and how to rectify it so we can proceed further.
Thank you,
Sowmiya Priya
This looks like a ZooKeeper issue: your processes are not able to connect to ZooKeeper. I can't say more without more information.
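As a rough first check on a real cluster deployment (Linux commands shown; assuming ZooKeeper listens on localhost:2181, so adjust the host and port to whatever conf/storm.yaml points at):

echo ruok | nc localhost 2181               # a healthy ZooKeeper answers "imok"
grep -A 3 storm.zookeeper conf/storm.yaml   # confirm where the Storm daemons expect ZooKeeper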

Request Apache Accumulo error help using Hadoop and Zookeeper in a test environment

I am setting up a test environment using Apache Accumulo 1.4.0, Hadoop 0.20.2 and ZooKeeper 3.3.3. Please see below for the issue.
Hadoop and ZooKeeper work fine together, but when I start Accumulo using the procedures on the Apache incubator site, I get the following stream of ZooKeeper INFO messages and a WARN:
2011-12-08 20:13:56,601 - INFO [main:QuorumPeerConfig#90] - Reading configuration from: /home/hadoop/zookeeper-3.3.3/bin/../conf/zoo.cfg
2011-12-08 20:13:56,603 - WARN [main:QuorumPeerMain#105] - Either no config or no quorum defined in config, running in standalone mode
2011-12-08 20:13:56,616 - INFO [main:QuorumPeerConfig#90] - Reading configuration from: /home/hadoop/zookeeper-3.3.3/bin/../conf/zoo.cfg
2011-12-08 20:13:56,617 - INFO [main:ZooKeeperServerMain#94] - Starting server
2011-12-08 20:13:56,626 - INFO [main:Environment#97] - Server environment:zookeeper.version=3.3.3-1073969, built on 02/23/2011 22:27 GMT
2011-12-08 20:13:56,627 - INFO [main:Environment#97] - Server environment:host.name.paz
2011-12-08 20:13:56,627 - INFO [main:Environment#97] - Server environment:java.version=1.6.0_26
2011-12-08 20:13:56,628 - INFO [main:Environment#97] - Server environment:java.vendor=Sun Microsystems Inc.
2011-12-08 20:13:56,629 - INFO [main:Environment#97] - Server environment:java.home=/usr/lib/jvm/java-6-sun-1.6.0.26/jre
2011-12-08 20:13:56,629 - INFO [main:Environment#97] - Server environment:java.class.path=/home/hadoop/zookeeper-3.3.3/bin/../build/classes:/home/hadoop/zookeeper-3.3.3/bin/../build/lib/*.jar:/home/hadoop/zookeeper-3.3.3/bin/../zookeeper-3.3.3.jar:/home/hadoop/zookeeper-3.3.3/bin/../lib/log4j-1.2.15.jar:/home/hadoop/zookeeper-3.3.3/bin/../lib/jline-0.9.94.jar:/home/hadoop/zookeeper-3.3.3/bin/../src/java/lib/*.jar:/home/hadoop/zookeeper-3.3.3/bin/../conf:
2011-12-08 20:13:56,630 - INFO [main:Environment#97] - Server environment:java.library.path=/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/i386/server:/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/i386:/usr/lib/jvm/java-6-sun-1.6.0.26/jre/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib
2011-12-08 20:13:56,630 - INFO [main:Environment#97] - Server environment:java.io.tmpdir=/tmp
2011-12-08 20:13:56,631 - INFO [main:Environment#97] - Server environment:java.compiler=<NA>
2011-12-08 20:13:56,631 - INFO [main:Environment#97] - Server environment:os.name=Linux
2011-12-08 20:13:56,632 - INFO [main:Environment#97] - Server environment:os.arch=i386
2011-12-08 20:13:56,633 - INFO [main:Environment#97] - Server environment:os.version=3.0.0-13-generic
2011-12-08 20:13:56,633 - INFO [main:Environment#97] - Server environment:user.name=hadoop
2011-12-08 20:13:56,634 - INFO [main:Environment#97] - Server environment:user.home=/home/hadoop
2011-12-08 20:13:56,634 - INFO [main:Environment#97] - Server environment:user.dir=/home/hadoop
2011-12-08 20:13:56,641 - INFO [main:ZooKeeperServer#663] - tickTime set to 2000
2011-12-08 20:13:56,641 - INFO [main:ZooKeeperServer#672] - minSessionTimeout set to -1
2011-12-08 20:13:56,642 - INFO [main:ZooKeeperServer#681] - maxSessionTimeout set to -1
2011-12-08 20:13:56,661 - INFO [main:NIOServerCnxn$Factory#143] - binding to port 0.0.0.0/0.0.0.0:2181
2011-12-08 20:13:56,691 - INFO [main:FileSnap#82] - Reading snapshot /home/hadoop/zoo/dataDir/version-2/snapshot.0
2011-12-08 20:13:56,708 - INFO [main:FileTxnSnapLog#208] - Snapshotting: 4e
2011-12-08 20:14:52,147 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory#251] - Accepted socket connection from /0:0:0:0:0:0:0:1:40694
2011-12-08 20:14:52,153 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#777] - Client attempting to establish new session at /0:0:0:0:0:0:0:1:40694
2011-12-08 20:14:52,154 - INFO [SyncThread:0:FileTxnLog#197] - Creating new log file: log.4f
2011-12-08 20:14:52,410 - INFO [SyncThread:0:NIOServerCnxn#1580] - Established session 0x13420623ee70000 with negotiated timeout 30000 for client /0:0:0:0:0:0:0:1:40694
2011-12-08 20:14:52,959 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory#251] - Accepted socket connection from /127.0.0.1:38446
2011-12-08 20:14:52,962 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#777] - Client attempting to establish new session at /127.0.0.1:38446
2011-12-08 20:14:53,007 - INFO [SyncThread:0:NIOServerCnxn#1580] - Established session 0x13420623ee70001 with negotiated timeout 30000 for client /127.0.0.1:38446
2011-12-08 20:14:59,932 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#634] - EndOfStreamException: Unable to read additional data from client sessionid 0x13420623ee70000, likely client has closed socket
and when I start the accumulo shell I get the following errors:
18 12:44:38,746 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:38,846 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:38,947 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:39,048 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:39,148 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:39,249 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:39,350 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:39,450 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
I corrected the tserver memory settings so they do not exceed what the JVM allows. The tserver no longer crashes and the error is resolved.
The answer came from the accumulo-incubator user list and is reposted at the bottom.
Basically, when modifying the memory settings to run in pseudo-distributed mode on a laptop, I made incorrect changes to the accumulo-site.xml and accumulo-env.sh files concerning tablet server memory usage. The clue to the error was found in the /home/hadoop/accumulo/logs/tserver*.log file:
20 18:20:00,951 [tabletserver.NativeMap] ERROR: Failed to load native map library /home/hadoop/accumulo/lib/native/map/libNativeMap-Linux-i386-32.so
java.lang.UnsatisfiedLinkError: Can't load library: /home/hadoop/accumulo/lib/native/map/libNativeMap-Linux-i386-32.so
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1706)
    at java.lang.Runtime.load0(Runtime.java:770)
    at java.lang.System.load(System.java:1003)
    at org.apache.accumulo.server.tabletserver.NativeMap.loadNativeLib(NativeMap.java:144)
    at org.apache.accumulo.server.tabletserver.NativeMap.<clinit>(NativeMap.java:156)
    at org.apache.accumulo.server.tabletserver.TabletServerResourceManager.<init>(TabletServerResourceManager.java:123)
    at org.apache.accumulo.server.tabletserver.TabletServer.config(TabletServer.java:2959)
    at org.apache.accumulo.server.tabletserver.TabletServer.main(TabletServer.java:3085)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.accumulo.start.Main$1.run(Main.java:89)
    at java.lang.Thread.run(Thread.java:662)
20 18:20:00,999 [tabletserver.TabletServer] ERROR: Uncaught exception in TabletServer.main, exiting
java.lang.IllegalArgumentException: Maximum tablet server map memory 134,217,728 and block cache sizes 186,646,528 is too large for this JVM configuration 132,579,328
    at org.apache.accumulo.server.tabletserver.TabletServerResourceManager.<init>(TabletServerResourceManager.java:134)
    at org.apache.accumulo.server.tabletserver.TabletServer.config(TabletServer.java:2959)
    at org.apache.accumulo.server.tabletserver.TabletServer.main(TabletServer.java:3085)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.accumulo.start.Main$1.run(Main.java:89)
    at java.lang.Thread.run(Thread.java:662)
Specific help text:
"The native libraries are not loading, which is shifting in-memory map into the java workspace. Add to that your block cache size, and your specifications for memory use are higher than the JVM will be allowed to allocate. The tablet servers complain, and exit. You should see these complaints on the accumulo monitor web pages.
You may find a benefit to rebuilding the native map library, which will move the allocation of the in-memory map to outside the JVM. This is not required.
The size of any memory dedicated to cache must be smaller than the size of the JVM, which must include substantial working space for RPC calls and garbage collection over time."
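For reference, the settings involved are the tablet server memory properties in accumulo-site.xml and the tserver heap in accumulo-env.sh (property names as of Accumulo 1.4). The numbers below are only an illustrative small-memory combination that stays inside a 128 MB heap, not recommended values:

# accumulo-site.xml: map memory plus block caches must fit inside the tserver JVM
#   tserver.memory.maps.max  = 80M
#   tserver.cache.data.size  = 7M
#   tserver.cache.index.size = 20M
# accumulo-env.sh: give the tserver at least that much heap
test -z "$ACCUMULO_TSERVER_OPTS" && export ACCUMULO_TSERVER_OPTS="-Xmx128m -Xms128m"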
If you run the ZooKeeper client (/home/hadoop/zookeeper-3.3.3/bin/zkCli.sh) and do an ls of /accumulo/<instance uuid>/tservers, I assume you won't see any servers listed. You should see one or more tablet servers listed if Accumulo was initialized properly. Are you sure you ran the Accumulo init script after setting your ZooKeeper servers in accumulo-site.xml per the instructions?
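As a concrete sketch (ZooKeeper path as installed above; replace <instance uuid> with whatever ls /accumulo actually shows):

/home/hadoop/zookeeper-3.3.3/bin/zkCli.sh -server localhost:2181
ls /accumulo
ls /accumulo/<instance uuid>/tservers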
Make sure you set "instance.zookeeper.host" to the location of your zookeeper node in the accumulo-site.xml file.
Additionally, check the logs for your tservers and loggers. If you have other configuration issues, they will not go live, which will cause the master to report having difficulties finding tservers.
Go into $ACCUMULO_HOME/conf. Edit the files masters, slaves and tracers to contain a single line that reads "localhost" (I assume you are doing single-node).
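For a single-node setup that could be as simple as (assuming ACCUMULO_HOME is set):

cd $ACCUMULO_HOME/conf
echo localhost > masters
echo localhost > slaves
echo localhost > tracers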
