Mesos slave on a different host unable to add itself to the cluster - mesos

Mesos slave is unable to add itself to the cluster. Right now I have 3 machines, with 3 slaves running and 1 master.
But on the Mesos web UI I can see just one master and one slave (the one on the same host as the master). I can see Marathon running, apps, etc.
But the other slaves are unable to connect to the master.
Slave logs:
I0825 21:30:00.971642 4110 slave.cpp:4193] Received oversubscribable resources from the resource estimator
I0825 21:30:01.000732 4106 group.cpp:313] Group process (group(1)#127.0.1.1:5051) connected to ZooKeeper
I0825 21:30:01.000821 4106 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0825 21:30:01.000874 4106 group.cpp:385] Trying to create path '/mesos' in ZooKeeper
I0825 21:30:01.007753 4106 detector.cpp:138] Detected a new leader: (id='9')
I0825 21:30:01.008038 4106 group.cpp:656] Trying to get '/mesos/info_0000000009' in ZooKeeper
W0825 21:30:01.020577 4106 detector.cpp:444] Leading master master#127.0.1.1:5050 is using a Protobuf binary format when registering with ZooKeeper (info): this will be deprecated as of Mesos 0.24 (see MESOS-2340)
I0825 21:30:01.021152 4106 detector.cpp:481] A new leading master (UPID=master#127.0.1.1:5050) is detected
I0825 21:30:01.021353 4106 status_update_manager.cpp:176] Pausing sending status updates
I0825 21:30:01.021385 4105 slave.cpp:684] New master detected at master#127.0.1.1:5050
I0825 21:30:01.022073 4105 slave.cpp:709] No credentials provided. Attempting to register without authentication
E0825 21:30:01.022299 4113 socket.hpp:107] Shutdown failed on fd=11: Transport endpoint is not connected [107]
ZooKeeper on the master:
ls /mesos
[info_0000000009, info_0000000010, log_replicas]
ls /mesos/info_0000000009
[]
Please note these lines in the slave logs:
Trying to get '/mesos/info_0000000009' in ZooKeeper
Why does the slave then assume the master is 127.0.1.1:5050? I never specified that:
Leading master master#127.0.1.1:5050
But ZooKeeper returns
ls /mesos/info_0000000009
[]
I looked into the master's ZooKeeper and found that it was not set at all. Is this a bug in Mesos, or am I missing some configuration?
Also, ZooKeeper on the master closed the client connection (maybe the client then started to connect to some other master):
2015-08-25 21:30:01,882 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxn#349] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x14f657dafeb000d, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)
2015-08-25 21:30:01,884 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxn#1001] - Closed socket connection for client /192.168.0.3:53125 which had sessionid 0x14f657dafeb000d
2015-08-25 21:30:01,952 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory#197] - Accepted socket connection from /192.168.0.3:53166
Note: the slave on the same host as the master works perfectly fine.
I have been trying to resolve this for more than two days now; please help.
It looks like a bug to me. Where can I see the current master in ZooKeeper? Is it something like /mesos/info_0000000009? But this is what I was getting in ZooKeeper:
ls /mesos/info_0000000009
[]
An empty array there. Is this correct? Because the slave logs show it was trying to look for this:
I0825 21:30:01.008038 4106 group.cpp:656] Trying to get '/mesos/info_0000000009' in ZooKeeper
W0825 21:30:01.020577 4106 detector.cpp:444] Leading master master#127.0.1.1:5050 is using a Protobuf binary format when registering with ZooKeeper (info): this will be deprecated as of Mesos 0.24 (see MESOS-2340)
I0825 21:30:01.021152 4106 detector.cpp:481] A new leading master (UPID=master#127.0.1.1:5050) is detected
and then the slave tries to reach 127.0.1.1:5050.
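(For reference, ls in the ZooKeeper shell only lists child znodes, so the empty list just means info_0000000009 has no children; the master's contact info is stored as the znode's data and has to be read with get. A minimal sketch, assuming the zkCli.sh shipped with the ZooKeeper package and the ZooKeeper running on 192.168.0.2:2281:
zkCli.sh -server 192.168.0.2:2281 get /mesos/info_0000000009
# should print the binary-encoded MasterInfo the master registered -- the "Protobuf binary
# format" mentioned in the detector warning -- which embeds the address the master advertised,
# here 127.0.1.1:5050.)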
Here are the complete slave logs:
Log file created at: 2015/08/27 07:12:56
Running on machine: vvwslave1
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0827 07:12:56.406455 1303 logging.cpp:172] INFO level logging started!
I0827 07:12:56.438398 1303 main.cpp:162] Build: 2015-07-24 10:05:39 by root
I0827 07:12:56.438534 1303 main.cpp:164] Version: 0.23.0
I0827 07:12:56.438634 1303 main.cpp:167] Git tag: 0.23.0
I0827 07:12:56.438733 1303 main.cpp:171] Git SHA: 4ce5475346a0abb7ef4b7ffc9836c5836d7c7a66
I0827 07:12:56.510270 1303 containerizer.cpp:111] Using isolation: posix/cpu,posix/mem
I0827 07:12:56.566021 1329 group.cpp:313] Group process (group(1)#127.0.1.1:5051) connected to ZooKeeper
I0827 07:12:56.566082 1329 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0827 07:12:56.566108 1329 group.cpp:385] Trying to create path '/mesos' in ZooKeeper
I0827 07:12:56.571959 1303 main.cpp:249] Starting Mesos slave
I0827 07:12:56.587656 1303 slave.cpp:190] Slave started on 1)#127.0.1.1:5051
I0827 07:12:56.587723 1303 slave.cpp:191] Flags at startup: --authenticatee="crammd5" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="mesos" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_remove_delay="6hrs" --docker_sandbox_directory="/mnt/mesos/sandbox" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/mesos/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher_dir="/usr/libexec/mesos" --log_dir="/var/log/mesos" --logbufsecs="0" --logging_level="INFO" --master="zk://192.168.0.2:2281/mesos" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="1secs" --resource_monitoring_interval="1secs" --revocable_cpu_low_priority="true" --strict="true" --switch_user="true" --version="false" --work_dir="/tmp/mesos"
I0827 07:12:56.592327 1303 slave.cpp:354] Slave resources: cpus(*):2; mem(*):979; disk(*):67653; ports(*):[31000-32000]
I0827 07:12:56.592576 1303 slave.cpp:384] Slave hostname: vvwslave1
I0827 07:12:56.592608 1303 slave.cpp:389] Slave checkpoint: true
I0827 07:12:56.633998 1330 state.cpp:36] Recovering state from '/tmp/mesos/meta'
I0827 07:12:56.644068 1330 status_update_manager.cpp:202] Recovering status update manager
I0827 07:12:56.644907 1330 containerizer.cpp:316] Recovering containerizer
I0827 07:12:56.650073 1330 slave.cpp:4026] Finished recovery
I0827 07:12:56.650527 1330 slave.cpp:4179] Querying resource estimator for oversubscribable resources
I0827 07:12:56.650653 1330 slave.cpp:4193] Received oversubscribable resources from the resource estimator
I0827 07:12:56.657416 1329 detector.cpp:138] Detected a new leader: (id='14')
I0827 07:12:56.657564 1329 group.cpp:656] Trying to get '/mesos/info_0000000014' in ZooKeeper
W0827 07:12:56.659080 1329 detector.cpp:444] Leading master master#127.0.1.1:5050 is using a Protobuf binary format when registering with ZooKeeper (info): this will be deprecated as of Mesos 0.24 (see MESOS-2340)
I0827 07:12:56.677889 1329 detector.cpp:481] A new leading master (UPID=master#127.0.1.1:5050) is detected
I0827 07:12:56.677989 1329 slave.cpp:684] New master detected at master#127.0.1.1:5050
I0827 07:12:56.678146 1326 status_update_manager.cpp:176] Pausing sending status updates
I0827 07:12:56.678195 1329 slave.cpp:709] No credentials provided. Attempting to register without authentication
I0827 07:12:56.678239 1329 slave.cpp:720] Detecting new master
I0827 07:12:56.678591 1329 slave.cpp:3087] master#127.0.1.1:5050 exited
W0827 07:12:56.678702 1329 slave.cpp:3090] Master disconnected! Waiting for a new master to be elected
E0827 07:12:56.678460 1332 socket.hpp:107] Shutdown failed on fd=11: Transport endpoint is not connected [107]
E0827 07:12:57.068922 1332 socket.hpp:107] Shutdown failed on fd=11: Transport endpoint is not connected [107]
E0827 07:12:58.829129 1332 socket.hpp:107] Shutdown failed on fd=11: Transport endpoint is not connected [107]
And the complete ZooKeeper logs from the master:
2015-08-27 07:12:42,672 - INFO [main:QuorumPeerConfig#101] - Reading configuration from: /etc/zookeeper/conf/zoo.cfg
2015-08-27 07:12:42,718 - ERROR [main:QuorumPeerConfig#283] - Invalid configuration, only one server specified (ignoring)
2015-08-27 07:12:42,720 - INFO [main:DatadirCleanupManager#78] - autopurge.snapRetainCount set to 10
2015-08-27 07:12:42,720 - INFO [main:DatadirCleanupManager#79] - autopurge.purgeInterval set to 0
2015-08-27 07:12:42,721 - INFO [main:DatadirCleanupManager#101] - Purge task is not scheduled.
2015-08-27 07:12:42,721 - WARN [main:QuorumPeerMain#113] - Either no config or no quorum defined in config, running in standalone mode
2015-08-27 07:12:42,741 - INFO [main:QuorumPeerConfig#101] - Reading configuration from: /etc/zookeeper/conf/zoo.cfg
2015-08-27 07:12:42,765 - ERROR [main:QuorumPeerConfig#283] - Invalid configuration, only one server specified (ignoring)
2015-08-27 07:12:42,765 - INFO [main:ZooKeeperServerMain#95] - Starting server
2015-08-27 07:12:42,776 - INFO [main:Environment#100] - Server environment:zookeeper.version=3.4.5--1, built on 06/10/2013 17:26 GMT
2015-08-27 07:12:42,776 - INFO [main:Environment#100] - Server environment:host.name=vvw
2015-08-27 07:12:42,776 - INFO [main:Environment#100] - Server environment:java.version=1.7.0_79
2015-08-27 07:12:42,776 - INFO [main:Environment#100] - Server environment:java.vendor=Oracle Corporation
2015-08-27 07:12:42,777 - INFO [main:Environment#100] - Server environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
2015-08-27 07:12:42,777 - INFO [main:Environment#100] - Server environment:java.class.path=/etc/zookeeper/conf:/usr/share/java/jline.jar:/usr/share/java/log4j-1.2.jar:/usr/share/java/xercesImpl.jar:/usr/share/java/xmlParserAPIs.jar:/usr/share/java/netty.jar:/usr/share/java/slf4j-api.jar:/usr/share/java/slf4j-log4j12.jar:/usr/share/java/zookeeper.jar
2015-08-27 07:12:42,777 - INFO [main:Environment#100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
2015-08-27 07:12:42,779 - INFO [main:Environment#100] - Server environment:java.io.tmpdir=/tmp
2015-08-27 07:12:42,779 - INFO [main:Environment#100] - Server environment:java.compiler=<NA>
2015-08-27 07:12:42,779 - INFO [main:Environment#100] - Server environment:os.name=Linux
2015-08-27 07:12:42,779 - INFO [main:Environment#100] - Server environment:os.arch=amd64
2015-08-27 07:12:42,780 - INFO [main:Environment#100] - Server environment:os.version=3.19.0-25-generic
2015-08-27 07:12:42,780 - INFO [main:Environment#100] - Server environment:user.name=zookeeper
2015-08-27 07:12:42,780 - INFO [main:Environment#100] - Server environment:user.home=/var/lib/zookeeper
2015-08-27 07:12:42,780 - INFO [main:Environment#100] - Server environment:user.dir=/
2015-08-27 07:12:42,789 - INFO [main:ZooKeeperServer#726] - tickTime set to 2000
2015-08-27 07:12:42,789 - INFO [main:ZooKeeperServer#735] - minSessionTimeout set to -1
2015-08-27 07:12:42,789 - INFO [main:ZooKeeperServer#744] - maxSessionTimeout set to -1
2015-08-27 07:12:42,806 - INFO [main:NIOServerCnxnFactory#94] - binding to port 0.0.0.0/0.0.0.0:2281
2015-08-27 07:12:42,826 - INFO [main:FileSnap#83] - Reading snapshot /var/lib/zookeeper/version-2/snapshot.705
2015-08-27 07:12:42,859 - INFO [main:FileTxnSnapLog#240] - Snapshotting: 0x728 to /var/lib/zookeeper/version-2/snapshot.728
2015-08-27 07:12:44,848 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory#197] - Accepted socket connection from /192.168.0.2:44500
2015-08-27 07:12:44,857 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer#793] - Connection request from old client /192.168.0.2:44500; will be dropped if server is in r-o mode
2015-08-27 07:12:44,859 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer#839] - Client attempting to establish new session at /192.168.0.2:44500
2015-08-27 07:12:44,862 - INFO [SyncThread:0:FileTxnLog#199] - Creating new log file: log.729
2015-08-27 07:12:45,299 - INFO [SyncThread:0:ZooKeeperServer#595] - Established session 0x14f6cd241e10000 with negotiated timeout 10000 for client /192.168.0.2:44500
2015-08-27 07:12:45,505 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory#197] - Accepted socket connection from /192.168.0.2:44501
2015-08-27 07:12:45,506 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer#793] - Connection request from old client /192.168.0.2:44501; will be dropped if server is in r-o mode
2015-08-27 07:12:45,506 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer#839] - Client attempting to establish new session at /192.168.0.2:44501
2015-08-27 07:12:45,509 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory#197] - Accepted socket connection from /192.168.0.2:44502
2015-08-27 07:12:45,510 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer#793] - Connection request from old client /192.168.0.2:44502; will be dropped if server is in r-o mode
2015-08-27 07:12:45,510 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer#839] - Client attempting to establish new session at /192.168.0.2:44502
2015-08-27 07:12:45,538 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory#197] - Accepted socket connection from /192.168.0.2:44503
2015-08-27 07:12:45,538 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory#197] - Accepted socket connection from /192.168.0.2:44504
2015-08-27 07:12:45,538 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer#793] - Connection request from old client /192.168.0.2:44503; will be dropped if server is in r-o mode
2015-08-27 07:12:45,539 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer#839] - Client attempting to establish new session at /192.168.0.2:44503
2015-08-27 07:12:45,539 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer#793] - Connection request from old client /192.168.0.2:44504; will be dropped if server is in r-o mode
2015-08-27 07:12:45,539 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer#839] - Client attempting to establish new session at /192.168.0.2:44504
2015-08-27 07:12:45,564 - INFO [SyncThread:0:ZooKeeperServer#595] - Established session 0x14f6cd241e10001 with negotiated timeout 10000 for client /192.168.0.2:44501
2015-08-27 07:12:45,674 - INFO [SyncThread:0:ZooKeeperServer#595] - Established session 0x14f6cd241e10002 with negotiated timeout 10000 for client /192.168.0.2:44502
2015-08-27 07:12:45,675 - INFO [SyncThread:0:ZooKeeperServer#595] - Established session 0x14f6cd241e10003 with negotiated timeout 10000 for client /192.168.0.2:44503
2015-08-27 07:12:45,676 - INFO [SyncThread:0:ZooKeeperServer#595] - Established session 0x14f6cd241e10004 with negotiated timeout 10000 for client /192.168.0.2:44504
2015-08-27 07:12:46,183 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory#197] - Accepted socket connection from /192.168.0.2:44506
2015-08-27 07:12:46,189 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer#839] - Client attempting to establish new session at /192.168.0.2:44506
2015-08-27 07:12:46,232 - INFO [SyncThread:0:ZooKeeperServer#595] - Established session 0x14f6cd241e10005 with negotiated timeout 10000 for client /192.168.0.2:44506
2015-08-27 07:12:48,195 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory#197] - Accepted socket connection from /192.168.0.2:44508
2015-08-27 07:12:48,196 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer#839] - Client attempting to establish new session at /192.168.0.2:44508
2015-08-27 07:12:48,212 - INFO [SyncThread:0:ZooKeeperServer#595] - Established session 0x14f6cd241e10006 with negotiated timeout 40000 for client /192.168.0.2:44508
2015-08-27 07:12:49,872 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory#197] - Accepted socket connection from /192.168.0.2:44509
2015-08-27 07:12:49,873 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer#793] - Connection request from old client /192.168.0.2:44509; will be dropped if server is in r-o mode
2015-08-27 07:12:49,873 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer#839] - Client attempting to establish new session at /192.168.0.2:44509
2015-08-27 07:12:49,878 - INFO [SyncThread:0:ZooKeeperServer#595] - Established session 0x14f6cd241e10007 with negotiated timeout 10000 for client /192.168.0.2:44509
2015-08-27 07:12:56,161 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory#197] - Accepted socket connection from /192.168.0.3:60436
2015-08-27 07:12:56,161 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer#793] - Connection request from old client /192.168.0.3:60436; will be dropped if server is in r-o mode
2015-08-27 07:12:56,161 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer#839] - Client attempting to establish new session at /192.168.0.3:60436
2015-08-27 07:12:56,189 - INFO [SyncThread:0:ZooKeeperServer#595] - Established session 0x14f6cd241e10008 with negotiated timeout 10000 for client /192.168.0.3:60436
And the logs from the master node:
I0827 07:12:45.412888 1604 leveldb.cpp:176] Opened db in 567.381081ms
I0827 07:12:45.469497 1604 leveldb.cpp:183] Compacted db in 56.508537ms
I0827 07:12:45.469674 1604 leveldb.cpp:198] Created db iterator in 21452ns
I0827 07:12:45.502590 1604 leveldb.cpp:204] Seeked to beginning of db in 32.834339ms
I0827 07:12:45.502900 1604 leveldb.cpp:273] Iterated through 3 keys in the db in 101809ns
I0827 07:12:45.503026 1604 replica.cpp:744] Replica recovered with log positions 73 -> 74 with 0 holes and 0 unlearned
I0827 07:12:45.507745 1643 log.cpp:238] Attempting to join replica to ZooKeeper group
I0827 07:12:45.507983 1643 recover.cpp:449] Starting replica recovery
I0827 07:12:45.508095 1643 recover.cpp:475] Replica is in VOTING status
I0827 07:12:45.508167 1643 recover.cpp:464] Recover process terminated
I0827 07:12:45.536058 1604 main.cpp:383] Starting Mesos master
I0827 07:12:45.559154 1604 master.cpp:368] Master 20150827-071245-16842879-5050-1604 (vvwmaster) started on 127.0.1.1:5050
I0827 07:12:45.559239 1604 master.cpp:370] Flags at startup: --allocation_interval="1secs" --allocator="HierarchicalDRF" --authenticate="false" --authenticate_slaves="false" --authenticators="crammd5" --framework_sorter="drf" --help="false" --hostname="vvwmaster" --initialize_driver_logging="true" --log_auto_initialize="true" --log_dir="/var/log/mesos" --logbufsecs="0" --logging_level="INFO" --max_slave_ping_timeouts="5" --port="5050" --quiet="false" --quorum="1" --recovery_slave_removal_limit="100%" --registry="replicated_log" --registry_fetch_timeout="1mins" --registry_store_timeout="5secs" --registry_strict="false" --root_submissions="true" --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins" --user_sorter="drf" --version="false" --webui_dir="/usr/share/mesos/webui" --work_dir="/var/lib/mesos" --zk="zk://192.168.0.2:2281/mesos" --zk_session_timeout="10secs"
I0827 07:12:45.559460 1604 master.cpp:417] Master allowing unauthenticated frameworks to register
I0827 07:12:45.559491 1604 master.cpp:422] Master allowing unauthenticated slaves to register
I0827 07:12:45.559587 1604 master.cpp:459] Using default 'crammd5' authenticator
W0827 07:12:45.559619 1604 authenticator.cpp:504] No credentials provided, authentication requests will be refused.
I0827 07:12:45.559909 1604 authenticator.cpp:511] Initializing server SASL
I0827 07:12:45.564357 1642 group.cpp:313] Group process (group(1)#127.0.1.1:5050) connected to ZooKeeper
I0827 07:12:45.564539 1642 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0827 07:12:45.564590 1642 group.cpp:385] Trying to create path '/mesos/log_replicas' in ZooKeeper
I0827 07:12:45.675650 1644 group.cpp:313] Group process (group(2)#127.0.1.1:5050) connected to ZooKeeper
I0827 07:12:45.675717 1644 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (1, 0, 0)
I0827 07:12:45.675750 1644 group.cpp:385] Trying to create path '/mesos/log_replicas' in ZooKeeper
I0827 07:12:45.676774 1639 group.cpp:313] Group process (group(3)#127.0.1.1:5050) connected to ZooKeeper
I0827 07:12:45.676828 1639 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0827 07:12:45.676857 1639 group.cpp:385] Trying to create path '/mesos' in ZooKeeper
I0827 07:12:45.678182 1640 group.cpp:313] Group process (group(4)#127.0.1.1:5050) connected to ZooKeeper
I0827 07:12:45.678235 1640 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0827 07:12:45.678380 1640 group.cpp:385] Trying to create path '/mesos' in ZooKeeper
I0827 07:12:45.809567 1645 network.hpp:415] ZooKeeper group memberships changed
I0827 07:12:45.816505 1644 group.cpp:656] Trying to get '/mesos/log_replicas/0000000013' in ZooKeeper
I0827 07:12:45.820705 1645 network.hpp:463] ZooKeeper group PIDs: { log-replica(1)#127.0.1.1:5050 }
I0827 07:12:46.020447 1644 contender.cpp:131] Joining the ZK group
I0827 07:12:46.020498 1639 master.cpp:1420] Successfully attached file '/var/log/mesos/mesos-master.INFO'
I0827 07:12:46.078451 1643 contender.cpp:247] New candidate (id='14') has entered the contest for leadership
I0827 07:12:46.078984 1645 detector.cpp:138] Detected a new leader: (id='14')
I0827 07:12:46.079110 1645 group.cpp:656] Trying to get '/mesos/info_0000000014' in ZooKeeper
W0827 07:12:46.084359 1645 detector.cpp:444] Leading master master#127.0.1.1:5050 is using a Protobuf binary format when registering with ZooKeeper (info): this will be deprecated as of Mesos 0.24 (see MESOS-2340)
I0827 07:12:46.084485 1645 detector.cpp:481] A new leading master (UPID=master#127.0.1.1:5050) is detected
I0827 07:12:46.084553 1645 master.cpp:1481] The newly elected leader is master#127.0.1.1:5050 with id 20150827-071245-16842879-5050-1604
I0827 07:12:46.084653 1645 master.cpp:1494] Elected as the leading master!
I0827 07:12:46.084682 1645 master.cpp:1264] Recovering from registrar
I0827 07:12:46.084812 1645 registrar.cpp:313] Recovering registrar
I0827 07:12:46.085160 1645 log.cpp:661] Attempting to start the writer
I0827 07:12:46.085683 1639 replica.cpp:477] Replica received implicit promise request with proposal 18
I0827 07:12:46.231271 1639 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 145.505945ms
I0827 07:12:46.231402 1639 replica.cpp:345] Persisted promised to 18
I0827 07:12:46.231667 1640 coordinator.cpp:230] Coordinator attemping to fill missing position
I0827 07:12:46.231801 1640 log.cpp:677] Writer started with ending position 74
I0827 07:12:46.232197 1646 leveldb.cpp:438] Reading position from leveldb took 60443ns
I0827 07:12:46.232319 1646 leveldb.cpp:438] Reading position from leveldb took 21312ns
I0827 07:12:46.232934 1646 registrar.cpp:346] Successfully fetched the registry (247B) in 148.019968ms
I0827 07:12:46.233131 1646 registrar.cpp:445] Applied 1 operations in 17888ns; attempting to update the 'registry'
I0827 07:12:46.234346 1640 log.cpp:685] Attempting to append 286 bytes to the log
I0827 07:12:46.234463 1640 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 75
I0827 07:12:46.234748 1645 replica.cpp:511] Replica received write request for position 75
I0827 07:12:46.274888 1645 leveldb.cpp:343] Persisting action (305 bytes) to leveldb took 40.044935ms
I0827 07:12:46.275140 1645 replica.cpp:679] Persisted action at 75
I0827 07:12:46.275503 1646 replica.cpp:658] Replica received learned notice for position 75
I0827 07:12:46.307917 1646 leveldb.cpp:343] Persisting action (307 bytes) to leveldb took 32.320539ms
I0827 07:12:46.308076 1646 replica.cpp:679] Persisted action at 75
I0827 07:12:46.308112 1646 replica.cpp:664] Replica learned APPEND action at position 75
I0827 07:12:46.308668 1646 registrar.cpp:490] Successfully updated the 'registry' in 75.472128ms
I0827 07:12:46.308749 1646 registrar.cpp:376] Successfully recovered registrar
I0827 07:12:46.308888 1646 log.cpp:704] Attempting to truncate the log to 75
I0827 07:12:46.309002 1646 master.cpp:1291] Recovered 1 slaves from the Registry (247B) ; allowing 10mins for slaves to re-register
I0827 07:12:46.309056 1646 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 76
I0827 07:12:46.309252 1646 replica.cpp:511] Replica received write request for position 76
I0827 07:12:46.352067 1646 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 42.749912ms
I0827 07:12:46.352377 1646 replica.cpp:679] Persisted action at 76
I0827 07:12:46.352900 1646 replica.cpp:658] Replica received learned notice for position 76
I0827 07:12:46.407814 1646 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 54.686166ms
I0827 07:12:46.408033 1646 leveldb.cpp:401] Deleting ~2 keys from leveldb took 50800ns
I0827 07:12:46.408068 1646 replica.cpp:679] Persisted action at 76
I0827 07:12:46.408102 1646 replica.cpp:664] Replica learned TRUNCATE action at position 76
I0827 07:12:46.884490 1644 master.cpp:3332] Registering slave at slave(1)#127.0.1.1:5051 (vvw) with id 20150827-071245-16842879-5050-1604-S0
I0827 07:12:46.900085 1644 registrar.cpp:445] Applied 1 operations in 43323ns; attempting to update the 'registry'
I0827 07:12:46.901564 1639 log.cpp:685] Attempting to append 440 bytes to the log
I0827 07:12:46.901736 1639 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 77
I0827 07:12:46.902035 1639 replica.cpp:511] Replica received write request for position 77
I0827 07:12:46.947882 1639 leveldb.cpp:343] Persisting action (459 bytes) to leveldb took 45.777578ms
I0827 07:12:46.948067 1639 replica.cpp:679] Persisted action at 77
I0827 07:12:46.948422 1639 replica.cpp:658] Replica received learned notice for position 77
I0827 07:12:46.992007 1639 leveldb.cpp:343] Persisting action (461 bytes) to leveldb took 43.518061ms
I0827 07:12:46.992187 1639 replica.cpp:679] Persisted action at 77
I0827 07:12:46.992249 1639 replica.cpp:664] Replica learned APPEND action at position 77
I0827 07:12:46.992826 1640 registrar.cpp:490] Successfully updated the 'registry' in 92.466176ms
I0827 07:12:46.992949 1639 log.cpp:704] Attempting to truncate the log to 77
I0827 07:12:46.993027 1639 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 78
I0827 07:12:46.993371 1639 replica.cpp:511] Replica received write request for position 78
I0827 07:12:46.993588 1640 master.cpp:3395] Registered slave 20150827-071245-16842879-5050-1604-S0 at slave(1)#127.0.1.1:5051 (vvw) with cpus(*):4; mem(*):1846; disk(*):141854; ports(*):[31000-32000]
I0827 07:12:46.993785 1644 hierarchical.hpp:528] Added slave 20150827-071245-16842879-5050-1604-S0 (vvw) with cpus(*):4; mem(*):1846; disk(*):141854; ports(*):[31000-32000] (allocated: )
I0827 07:12:47.018685 1641 master.cpp:3687] Received update of slave 20150827-071245-16842879-5050-1604-S0 at slave(1)#127.0.1.1:5051 (vvw) with total oversubscribed resources
I0827 07:12:47.018934 1641 hierarchical.hpp:588] Slave 20150827-071245-16842879-5050-1604-S0 (vvw) updated with oversubscribed resources (total: cpus(*):4; mem(*):1846; disk(*):141854; ports(*):[31000-32000], allocated: )
I0827 07:12:47.036170 1639 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 42.72315ms
I0827 07:12:47.036388 1639 replica.cpp:679] Persisted action at 78

"But at the mesos page i can see just one master and one slave (same as the master's host present)."
Most probably this happens because the master is not able to establish a connection to the agents (aka slaves) living on the other machines. Right now (this may change with the new HTTP API) the master must be able to open a connection to an agent, which means an agent must report a non-local IP when it registers with the master. From your logs it looks like the agents bind to a local IP (127.0.1.1). You can change that via the --ip flag.
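For example, a minimal sketch of starting the agent with an explicit IP (assuming 192.168.0.3 is that machine's LAN address; adjust to your hosts):
mesos-slave --master=zk://192.168.0.2:2281/mesos \
            --ip=192.168.0.3 \
            --hostname=192.168.0.3 \
            --work_dir=/tmp/mesos
The same idea applies to the master: starting it with --ip=192.168.0.2 (and --hostname) makes it advertise a reachable address in ZooKeeper instead of 127.0.1.1.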

I have noticed that you are running Mesos as a service, and I think there must be a configuration file where you should specify your master IP (or ZooKeeper IP); the default value in that file is 127.0.1.1, so only the slave on the same machine as your master can connect to it, because when running mesos-slave you must give it the master's IP.
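For instance, with the Mesosphere Debian/Ubuntu packages the service scripts typically read per-flag files under /etc/mesos-slave/ and the ZooKeeper URL from /etc/mesos/zk, so a sketch along these lines (the paths and IPs are assumptions about your setup) would be:
# on each slave machine, e.g. 192.168.0.3
echo "zk://192.168.0.2:2281/mesos" | sudo tee /etc/mesos/zk
echo "192.168.0.3" | sudo tee /etc/mesos-slave/ip         # this machine's LAN IP
echo "192.168.0.3" | sudo tee /etc/mesos-slave/hostname
sudo service mesos-slave restart
The 127.0.1.1 itself usually comes from the default Debian/Ubuntu /etc/hosts entry that maps the machine's hostname to 127.0.1.1; either set the flags above or point that entry at the LAN IP.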

Related

Connection Refused when creating a two nodes cluster in Apache NiFi

I'm facing a problem with a refused connection on the cluster node protocol port.
I'm using the following configs to create the two-node cluster.
For the first node (manager):
####################
# State Management #
####################
nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=true
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=10.129.140.22
nifi.web.http.port=3000
nifi.web.http.network.interface.default=
nifi.web.https.host=
nifi.web.https.port=
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
nifi.web.max.header.size=16 KB
nifi.web.proxy.context.path=
nifi.web.proxy.host=
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=
nifi.cluster.node.protocol.port=10000
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=
# cluster load balancing properties #
nifi.cluster.load.balance.host=
nifi.cluster.load.balance.port=6342
nifi.cluster.load.balance.connections.per.node=4
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=localhost:2181
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi
For the second node (slave):
####################
# State Management #
####################
nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=false
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=
nifi.web.http.port=9021
nifi.web.http.network.interface.default=
nifi.web.https.host=
nifi.web.https.port=
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
nifi.web.max.header.size=16 KB
nifi.web.proxy.context.path=
nifi.web.proxy.host=
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=
nifi.cluster.node.protocol.port=10001
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=
# cluster load balancing properties #
nifi.cluster.load.balance.host=10.129.140.22
nifi.cluster.load.balance.port=6343
nifi.cluster.load.balance.connections.per.node=4
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=10.129.140.22:2181
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi
The log files show the following.
For the slave:
2019-05-23 10:37:07,384 INFO [main] o.a.n.c.repository.FileSystemRepository Initializing FileSystemRepository with 'Always Sync' set to false
2019-05-23 10:37:07,541 INFO [main] o.apache.nifi.controller.FlowController Not enabling RAW Socket Site-to-Site functionality because nifi.remote.input.socket.port is not set
2019-05-23 10:37:07,546 INFO [main] o.apache.nifi.controller.FlowController Checking if there is already a Cluster Coordinator Elected...
2019-05-23 10:37:07,591 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2019-05-23 10:37:07,658 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: CONNECTED
2019-05-23 10:37:07,693 INFO [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
2019-05-23 10:37:07,697 INFO [main] o.apache.nifi.controller.FlowController The Election for Cluster Coordinator has already begun (Leader is localhost:10000). Will not register to be elected for this role until after connecting to the cluster and inheriting the cluster's flow.
2019-05-23 10:37:07,699 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=true] Registered new Leader Selector for role Cluster Coordinator; this node is a silent observer in the election.
2019-05-23 10:37:07,699 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2019-05-23 10:37:07,703 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Registered new Leader Selector for role Cluster Coordinator; this node is a silent observer in the election.
2019-05-23 10:37:07,703 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] started
2019-05-23 10:37:07,703 INFO [main] o.a.n.c.c.h.AbstractHeartbeatMonitor Heartbeat Monitor started
2019-05-23 10:37:07,706 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: CONNECTED
2019-05-23 10:37:09,587 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#1a6a4595{nifi-api,/nifi-api,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-api-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-api-1.9.2.war}
2019-05-23 10:37:09,850 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=77ms
2019-05-23 10:37:09,852 INFO [main] o.e.j.s.h.C._nifi_content_viewer No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:09,873 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#4b1b2255{nifi-content-viewer,/nifi-content-viewer,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-content-viewer-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-content-viewer-1.9.2.war}
2019-05-23 10:37:09,895 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=6ms
2019-05-23 10:37:09,896 WARN [main] o.e.j.webapp.StandardDescriptorProcessor Duplicate mapping from / to default
2019-05-23 10:37:09,915 INFO [main] o.e.j.s.h.ContextHandler._nifi_docs No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:09,917 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#4965454c{nifi-docs,/nifi-docs,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-docs-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-docs-1.9.2.war}
2019-05-23 10:37:09,936 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=8ms
2019-05-23 10:37:09,955 INFO [main] o.e.j.server.handler.ContextHandler._ No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:09,957 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#1e4a4ed5{nifi-error,/,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-error-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-error-1.9.2.war}
2019-05-23 10:37:09,967 INFO [main] o.eclipse.jetty.server.AbstractConnector Started ServerConnector#4518bffd{HTTP/1.1,[http/1.1]}{0.0.0.0:9021}
2019-05-23 10:37:09,967 INFO [main] org.eclipse.jetty.server.Server Started #28769ms
2019-05-23 10:37:09,978 INFO [main] org.apache.nifi.web.server.JettyServer Loading Flow...
2019-05-23 10:37:09,982 INFO [main] org.apache.nifi.io.socket.SocketListener Now listening for connections from nodes on port 10001
2019-05-23 10:37:10,026 INFO [main] o.apache.nifi.controller.FlowController Successfully synchronized controller with proposed flow
2019-05-23 10:37:10,071 INFO [main] o.a.nifi.controller.StandardFlowService Connecting Node: localhost:9021
2019-05-23 10:37:10,073 INFO [main] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10000; will use this address for sending heartbeat messages
2019-05-23 10:37:10,074 WARN [main] o.a.nifi.controller.StandardFlowService Failed to connect to cluster due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed to create socket to localhost:10000 due to: java.net.ConnectException: Connection refused (Connection refused)
2019-05-23 10:37:12,715 WARN [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Failed to determine which node is elected active Cluster Coordinator: ZooKeeper reports the address as localhost:10000, but there is no node with this address. Attempted to determine the node's information but failed to retrieve its information due to org.apache.nifi.cluster.protocol.ProtocolException: Failed to create socket due to: java.net.ConnectException: Connection refused (Connection refused)
2019-05-23 10:37:12,720 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for localhost:9021 -- Received heartbeat from node previously disconnected due to Has Not Yet Connected to Cluster. Issuing reconnection request.
2019-05-23 10:37:12,721 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for localhost:9021 -- Requesting that node connect to cluster
2019-05-23 10:37:12,721 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Status of localhost:9021 changed from NodeConnectionStatus[nodeId=localhost:9021, state=DISCONNECTED, Disconnect Code=Has Not Yet Connected to Cluster, Disconnect Reason=Has Not Yet Connected to Cluster, updateId=1] to NodeConnectionStatus[nodeId=localhost:9021, state=CONNECTING, updateId=3]
2019-05-23 10:37:15,075 INFO [main] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10000; will use this address for sending heartbeat messages
2019-05-23 10:37:15,076 WARN [main] o.a.nifi.controller.StandardFlowService Failed to connect to cluster due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed to create socket to localhost:10000 due to: java.net.ConnectException: Connection refused (Connection refused)
For the manager:
2019-05-23 10:36:59,752 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-05-23 10:36:59,752 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:java.io.tmpdir=/tmp
2019-05-23 10:36:59,752 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:java.compiler=<NA>
2019-05-23 10:36:59,752 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:os.name=Linux
2019-05-23 10:36:59,752 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:os.arch=amd64
2019-05-23 10:36:59,753 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:os.version=4.15.0-20-generic
2019-05-23 10:36:59,753 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:user.name=root
2019-05-23 10:36:59,753 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:user.home=/root
2019-05-23 10:36:59,753 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:user.dir=/home/superman/nifi-1.9.2
2019-05-23 10:36:59,753 INFO [main] o.a.zookeeper.server.ZooKeeperServer tickTime set to 2000
2019-05-23 10:36:59,754 INFO [main] o.a.zookeeper.server.ZooKeeperServer minSessionTimeout set to -1
2019-05-23 10:36:59,754 INFO [main] o.a.zookeeper.server.ZooKeeperServer maxSessionTimeout set to -1
2019-05-23 10:36:59,855 INFO [main] o.apache.nifi.controller.FlowController Checking if there is already a Cluster Coordinator Elected...
2019-05-23 10:36:59,903 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2019-05-23 10:36:59,950 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] o.a.zookeeper.server.ZooKeeperServer Client attempting to establish new session at /127.0.0.1:40388
2019-05-23 10:36:59,950 INFO [SyncThread:0] o.a.z.server.persistence.FileTxnLog Creating new log file: log.3c
2019-05-23 10:36:59,963 INFO [SyncThread:0] o.a.zookeeper.server.ZooKeeperServer Established session 0x16ae443f4130000 with negotiated timeout 4000 for client /127.0.0.1:40388
2019-05-23 10:36:59,975 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: CONNECTED
2019-05-23 10:36:59,998 INFO [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
2019-05-23 10:37:00,003 INFO [main] o.apache.nifi.controller.FlowController The Election for Cluster Coordinator has already begun (Leader is localhost:10001). Will not register to be elected for this role until after connecting to the cluster and inheriting the cluster's flow.
2019-05-23 10:37:00,005 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=true] Registered new Leader Selector for role Cluster Coordinator; this node is a silent observer in the election.
2019-05-23 10:37:00,005 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2019-05-23 10:37:00,017 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Registered new Leader Selector for role Cluster Coordinator; this node is a silent observer in the election.
2019-05-23 10:37:00,017 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] started
2019-05-23 10:37:00,017 INFO [main] o.a.n.c.c.h.AbstractHeartbeatMonitor Heartbeat Monitor started
2019-05-23 10:37:00,019 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] o.a.zookeeper.server.ZooKeeperServer Client attempting to establish new session at /127.0.0.1:40390
2019-05-23 10:37:00,020 INFO [SyncThread:0] o.a.zookeeper.server.ZooKeeperServer Established session 0x16ae443f4130001 with negotiated timeout 4000 for client /127.0.0.1:40390
2019-05-23 10:37:00,020 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: CONNECTED
2019-05-23 10:37:02,022 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#1a05ff8e{nifi-api,/nifi-api,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-api-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-api-1.9.2.war}
2019-05-23 10:37:02,373 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=165ms
2019-05-23 10:37:02,375 INFO [main] o.e.j.s.h.C._nifi_content_viewer No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:02,401 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#251e2f4a{nifi-content-viewer,/nifi-content-viewer,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-content-viewer-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-content-viewer-1.9.2.war}
2019-05-23 10:37:02,419 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=6ms
2019-05-23 10:37:02,420 WARN [main] o.e.j.webapp.StandardDescriptorProcessor Duplicate mapping from / to default
2019-05-23 10:37:02,421 INFO [main] o.e.j.s.h.ContextHandler._nifi_docs No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:02,441 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#1abea1ed{nifi-docs,/nifi-docs,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-docs-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-docs-1.9.2.war}
2019-05-23 10:37:02,457 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=6ms
2019-05-23 10:37:02,475 INFO [main] o.e.j.server.handler.ContextHandler._ No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:02,478 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#6f5288c5{nifi-error,/,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-error-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-error-1.9.2.war}
2019-05-23 10:37:02,488 INFO [main] o.eclipse.jetty.server.AbstractConnector Started ServerConnector#167ed1cf{HTTP/1.1,[http/1.1]}{10.129.140.22:3000}
2019-05-23 10:37:02,488 INFO [main] org.eclipse.jetty.server.Server Started #26145ms
2019-05-23 10:37:02,500 INFO [main] org.apache.nifi.web.server.JettyServer Loading Flow...
2019-05-23 10:37:02,503 INFO [main] org.apache.nifi.io.socket.SocketListener Now listening for connections from nodes on port 10000
2019-05-23 10:37:02,545 INFO [main] o.apache.nifi.controller.FlowController Successfully synchronized controller with proposed flow
2019-05-23 10:37:02,587 INFO [main] o.a.nifi.controller.StandardFlowService Connecting Node: 10.129.140.22:3000
2019-05-23 10:37:02,589 INFO [main] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10001; will use this address for sending heartbeat messages
2019-05-23 10:37:02,590 WARN [main] o.a.nifi.controller.StandardFlowService Failed to connect to cluster due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed to create socket to localhost:10001 due to: java.net.ConnectException: Connection refused (Connection refused)
2019-05-23 10:37:04,001 INFO [SessionTracker] o.a.zookeeper.server.ZooKeeperServer Expiring session 0x16ae42f180d0003, timeout of 4000ms exceeded
2019-05-23 10:37:04,001 INFO [SessionTracker] o.a.zookeeper.server.ZooKeeperServer Expiring session 0x16ae42f180d0002, timeout of 4000ms exceeded
2019-05-23 10:37:05,026 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Received heartbeat from node previously disconnected due to Has Not Yet Connected to Cluster. Issuing reconnection request.
2019-05-23 10:37:05,028 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Requesting that node connect to cluster
2019-05-23 10:37:05,028 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=DISCONNECTED, Disconnect Code=Has Not Yet Connected to Cluster, Disconnect Reason=Has Not Yet Connected to Cluster, updateId=0] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=5]
2019-05-23 10:37:07,591 WARN [main] o.a.nifi.controller.StandardFlowService There is currently no Cluster Coordinator. This often happens upon restart of NiFi when running an embedded ZooKeeper. Will register this node to become the active Cluster Coordinator and will attempt to connect to cluster again
2019-05-23 10:37:07,594 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Registered new Leader Selector for role Cluster Coordinator; this node is an active participant in the election.
2019-05-23 10:37:07,612 INFO [Leader Election Notification Thread-1] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener#1d6dcdcb This node has been elected Leader for Role 'Cluster Coordinator'
2019-05-23 10:37:07,612 INFO [Leader Election Notification Thread-1] o.apache.nifi.controller.FlowController This node elected Active Cluster Coordinator
2019-05-23 10:37:07,668 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Received heartbeat from node previously disconnected due to Has Not Yet Connected to Cluster. Issuing reconnection request.
2019-05-23 10:37:07,668 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Requesting that node connect to cluster
2019-05-23 10:37:07,669 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=DISCONNECTED, Disconnect Code=Has Not Yet Connected to Cluster, Disconnect Reason=Has Not Yet Connected to Cluster, updateId=1] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=6]
2019-05-23 10:37:07,675 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Received heartbeat from node previously disconnected due to Has Not Yet Connected to Cluster. Issuing reconnection request.
2019-05-23 10:37:07,675 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Requesting that node connect to cluster
2019-05-23 10:37:07,675 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=DISCONNECTED, Disconnect Code=Has Not Yet Connected to Cluster, Disconnect Reason=Has Not Yet Connected to Cluster, updateId=2] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=7]
2019-05-23 10:37:07,694 INFO [Process Cluster Protocol Request-1] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=5] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=5]
2019-05-23 10:37:07,695 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Received heartbeat from node previously disconnected due to Has Not Yet Connected to Cluster. Issuing reconnection request.
2019-05-23 10:37:07,699 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Requesting that node connect to cluster
2019-05-23 10:37:07,700 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=DISCONNECTED, Disconnect Code=Has Not Yet Connected to Cluster, Disconnect Reason=Has Not Yet Connected to Cluster, updateId=3] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=8]
2019-05-23 10:37:07,701 INFO [Process Cluster Protocol Request-5] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=7] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=7]
2019-05-23 10:37:07,702 INFO [Process Cluster Protocol Request-1] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 19834836-9bda-41b3-8fef-4a288d90c7bf (type=NODE_STATUS_CHANGE, length=1103 bytes) from localhost.localdomain in 33 millis
2019-05-23 10:37:07,702 INFO [Process Cluster Protocol Request-5] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 85b0bb3f-c2a6-4dfd-abd6-e9df14710c4d (type=NODE_STATUS_CHANGE, length=1103 bytes) from localhost.localdomain in 10 millis
2019-05-23 10:37:07,703 INFO [Process Cluster Protocol Request-3] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=6] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=6]
2019-05-23 10:37:07,705 INFO [Process Cluster Protocol Request-3] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 80447901-4ad3-44e3-91ad-d9f075624eae (type=NODE_STATUS_CHANGE, length=1103 bytes) from localhost.localdomain in 31 millis
2019-05-23 10:37:07,706 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Processing reconnection request from cluster coordinator.
2019-05-23 10:37:07,706 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Received a Reconnection Request that contained no DataFlow. Will attempt to connect to cluster using local flow.
2019-05-23 10:37:07,707 INFO [Process Cluster Protocol Request-2] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 22cdceee-c01f-445f-a091-38812e878d10 (type=RECONNECTION_REQUEST, length=3095 bytes) from 10.129.140.22:3000 in 34 millis
2019-05-23 10:37:07,708 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Processing reconnection request from cluster coordinator.
2019-05-23 10:37:07,708 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Received a Reconnection Request that contained no DataFlow. Will attempt to connect to cluster using local flow.
2019-05-23 10:37:07,709 INFO [Process Cluster Protocol Request-4] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 8605cf39-2034-4ee2-92c4-0fbe54e97fb2 (type=RECONNECTION_REQUEST, length=3013 bytes) from 10.129.140.22:3000 in 27 millis
2019-05-23 10:37:07,712 INFO [Reconnect 10.129.140.22:3000] o.a.n.c.c.node.NodeClusterCoordinator Successfully requested that 10.129.140.22:3000 join the cluster
2019-05-23 10:37:07,712 INFO [Reconnect 10.129.140.22:3000] o.a.n.c.c.node.NodeClusterCoordinator Successfully requested that 10.129.140.22:3000 join the cluster
2019-05-23 10:37:07,725 INFO [Process Cluster Protocol Request-6] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 0ca55348-44eb-416b-91dd-3d80da4c5ebe (type=RECONNECTION_REQUEST, length=3013 bytes) from 10.129.140.22:3000 in 29 millis
2019-05-23 10:37:07,725 INFO [Reconnect 10.129.140.22:3000] o.a.n.c.c.node.NodeClusterCoordinator Successfully requested that 10.129.140.22:3000 join the cluster
2019-05-23 10:37:07,728 INFO [Process Cluster Protocol Request-7] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=8] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=8]
2019-05-23 10:37:07,728 INFO [Process Cluster Protocol Request-7] o.a.n.c.p.impl.SocketProtocolListener Finished processing request c9b647d7-67ac-4d0a-833b-8a0a8cc0ba6d (type=NODE_STATUS_CHANGE, length=1103 bytes) from localhost.localdomain in 3 millis
2019-05-23 10:37:07,728 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Connecting Node: 10.129.140.22:3000
2019-05-23 10:37:07,725 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Processing reconnection request from cluster coordinator.
2019-05-23 10:37:07,732 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.h.AbstractHeartbeatMonitor Finished processing 4 heartbeats in 2 seconds, 708 millis
2019-05-23 10:37:07,732 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Received a Reconnection Request that contained no DataFlow. Will attempt to connect to cluster using local flow.
2019-05-23 10:37:07,733 INFO [Reconnect to Cluster] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10000; will use this address for sending heartbeat messages
2019-05-23 10:37:07,734 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Connecting Node: 10.129.140.22:3000
2019-05-23 10:37:07,735 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Connecting Node: 10.129.140.22:3000
2019-05-23 10:37:07,736 INFO [Reconnect to Cluster] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10000; will use this address for sending heartbeat messages
2019-05-23 10:37:07,736 INFO [Reconnect to Cluster] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10000; will use this address for sending heartbeat messages
2019-05-23 10:37:07,748 INFO [Process Cluster Protocol Request-8] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 434daf63-1beb-4b82-9290-bb0da4e89b7f (type=RECONNECTION_REQUEST, length=2972 bytes) from 10.129.140.22:3000 in 16 millis
2019-05-23 10:37:07,749 INFO [Reconnect 10.129.140.22:3000] o.a.n.c.c.node.NodeClusterCoordinator Successfully requested that 10.129.140.22:3000 join the cluster
Set these properties on both nodes, using each node's real hostname or IP address:
nifi.web.http.host=<host>
nifi.cluster.node.address=<host>
Also watch out for how this value resolves across the network:
nifi.zookeeper.connect.string=localhost:2181
Here it points to 'localhost' while the nodes identify themselves by real IP addresses, so each node ends up talking to its own local ZooKeeper rather than a shared one. The nodes rely on that shared ZooKeeper for request replication, primary/coordinator node election and flow election, so the connect string must be identical and reachable on every node (see the sketch below).
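As a minimal sketch, here is what the relevant nifi.properties entries could look like on the node at 10.129.140.22 (the second host in the connect string and the ZooKeeper port are illustrative assumptions, not values taken from the logs above):
nifi.web.http.host=10.129.140.22
nifi.cluster.is.node=true
nifi.cluster.node.address=10.129.140.22
nifi.cluster.node.protocol.port=3000
nifi.zookeeper.connect.string=10.129.140.22:2181,10.129.140.21:2181
The other node uses its own address for the host properties but the exact same connect string, so both sides elect the coordinator through the same ensemble.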

Setting up a RocketMQ cluster: slave not visible & not replicating

I'm trying to set up a RocketMQ cluster, with a single name server, 1 master and 2 slaves. But, I'm running into some problems.
The version I'm running is downloaded from github/rocketmq-all-4.1.0-incubating.zip.
The brokers are run using mqbroker -c broker.conf, where broker.conf
differs for master and slave. For the master I have:
listenPort=10911
brokerName=mybroker
brokerClusterName=mybrokercluster
brokerId=0
deleteWhen=04
fileReservedTime=48
brokerRole=SYNC_MASTER
flushDiskType=ASYNC_FLUSH
And for slaves:
listenPort=10911
brokerName=mybroker
brokerClusterName=mybrokercluster
brokerId=1
deleteWhen=04
fileReservedTime=48
brokerRole=SLAVE
flushDiskType=ASYNC_FLUSH
The second slave has brokerId=2.
The brokers start up fine; here are some parts of the logs for a slave:
2017-10-02 20:31:35 INFO main - brokerRole=ASYNC_MASTER
2017-10-02 20:31:35 INFO main - flushDiskType=ASYNC_FLUSH
(...)
2017-10-02 20:31:35 INFO main - Replace, key: brokerId, value: 0 -> 1
2017-10-02 20:31:35 INFO main - Replace, key: brokerRole, value:
ASYNC_MASTER -> SLAVE
(...)
2017-10-02 20:31:37 INFO main - Set user specified name server address:
172.22.1.38:9876
2017-10-02 20:31:37 INFO ShutdownHook - Shutdown hook was invoked, 1
2017-10-02 20:31:37 INFO ShutdownHook - shutdown thread
PullRequestHoldService interrupt false
2017-10-02 20:31:37 INFO ShutdownHook - join thread PullRequestHoldService
eclipse time(ms) 0 90000
2017-10-02 20:31:37 WARN ShutdownHook - unregisterBroker Exception,
172.22.1.38:9876
org.apache.rocketmq.remoting.exception.RemotingConnectException: connect to
<172.22.1.38:9876> failed
at
org.apache.rocketmq.remoting.netty.NettyRemotingClient.invokeSync(NettyRemotingClient.java:359)
~[rocketmq-remoting-4.1.0-incubating.jar:4.1.0-incubating]
at
org.apache.rocketmq.broker.out.BrokerOuterAPI.unregisterBroker(BrokerOuterAPI.java:221)
~[rocketmq-broker-4.1.0-incubating.jar:4.1.0-incubating]
at
org.apache.rocketmq.broker.out.BrokerOuterAPI.unregisterBrokerAll(BrokerOuterAPI.java:198)
~[rocketmq-broker-4.1.0-incubating.jar:4.1.0-incubating]
at
org.apache.rocketmq.broker.BrokerController.unregisterBrokerAll(BrokerController.java:623)
[rocketmq-broker-4.1.0-incubating.jar:4.1.0-incubating]
at
org.apache.rocketmq.broker.BrokerController.shutdown(BrokerController.java:589)
[rocketmq-broker-4.1.0-incubating.jar:4.1.0-incubating]
at org.apache.rocketmq.broker.BrokerStartup$1.run(BrokerStartup.java:218)
[rocketmq-broker-4.1.0-incubating.jar:4.1.0-incubating]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
2017-10-02 20:31:37 INFO ShutdownHook - Shutdown hook over, consuming total
time(ms): 25
2017-10-02 20:31:45 INFO BrokerControllerScheduledThread1 - dispatch behind
commit log 0 bytes
2017-10-02 20:31:45 INFO BrokerControllerScheduledThread1 - Slave fall
behind master: 0 bytes
2017-10-02 20:31:45 INFO BrokerControllerScheduledThread1 - register broker
to name server 172.22.1.38:9876 OK
2017-10-02 20:32:15 INFO BrokerControllerScheduledThread1 - register broker
to name server 172.22.1.38:9876 OK
I suspect the broker is trying to connect to the name server, which isn't running initially, so it retries and eventually succeeds?
However, later when trying clusterList I only see one broker listed, which happens to be a slave (172.22.1.17) and has brokerId=2 in the configuration (although here it's listed as 0):
$ ./mqadmin clusterList -n 172.22.1.38:9876
#Cluster Name    #Broker Name  #BID  #Addr              #Version         #InTPS(LOAD)  #OutTPS(LOAD)  #PCWait(ms)  #Hour      #SPACE
mybrokercluster  mybroker      0     172.22.1.17:10911  V4_1_0_SNAPSHOT  0.00(0,0ms)   0.00(0,0ms)    0            418597.80  -1.0000
Moreover, when sending messages to the master, I get SLAVE_NOT_AVAILABLE.
Why is that? Are the brokers configured properly? If so, why does clusterList report them incorrectly?
You should change the slave's listen port: 10911 is already in use by another process (the master node), so each slave should use a different TCP port (e.g. 10921, 10931, and so on); see the example broker.conf below.
Tip: my cluster is deployed on a single machine, so changing the TCP ports was enough for it to start up successfully. If your master and slaves are deployed on different machines and startup still fails, check the RocketMQ error logs for more information.
Note: when one master has more than one slave, each slave must have a different brokerId.
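For instance, if master and slaves share a machine, a slave's broker.conf might look like this (the port number is an illustrative choice; any free port works):
listenPort=10921
brokerName=mybroker
brokerClusterName=mybrokercluster
brokerId=1
deleteWhen=04
fileReservedTime=48
brokerRole=SLAVE
flushDiskType=ASYNC_FLUSH
The second slave would then use, say, listenPort=10931 together with brokerId=2.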

Hortonworks HA Namenodes gives an error "Operation category READ is not supported in state standby"

My Hadoop cluster's HA active namenode (host1) suddenly switched to the standby namenode (host2). I could not find any error in the Hadoop logs (on any server) to identify the root cause.
After the namenode switch, the following error appeared frequently in the HDFS logs and none of the applications could read HDFS files.
2014-07-17 01:58:53,381 WARN namenode.FSNamesystem
(FSNamesystem.java:getCorruptFiles(6769)) - Get corrupt file blocks
returned error: Operation category READ is not supported in state
standby
Once I restart the new active node (host2), the active role switches back to host1. The cluster then works as normal and users can retrieve HDFS files again.
I'm using Hortonworks 2.1.2.0 and HDFS version 2.4.0.2.1
Edit: 21st July 2014
The following entries were found in the active namenode's logs when the active-standby switch happened:
NT_SETTINGS-1675610.csv dst=null perm=null 2014-07-20
09:06:44,746 INFO FSNamesystem.audit
(FSNamesystem.java:logAuditMessage(7755)) - allowed=true
ugi=storm (auth:SIMPLE) ip=/10.0.1.50 cmd=getfileinfo
src=/user/tungsten/staging/LEAPSET/PRODUCTS/PRODUCTS-138018
6.csv dst=null perm=null 2014-07-20 09:06:44,747 INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7755)) -
allowed=true ugi=storm (auth:SIMPLE) ip=/10.0.1.50
cmd=getfileinfo
src=/user/tungsten/staging/LEAPSET/MERCHANT_SETTINGS/MERCHA
NT_SETTINGS-1695794.csv dst=null perm=null 2014-07-20
09:06:44,747 INFO FSNamesystem.audit
(FSNamesystem.java:logAuditMessage(7755)) - allowed=true
ugi=storm (auth:SIMPLE) ip=/10.0.1.50 cmd=getfileinfo
src=/user/tungsten/staging/LEAPSET/PRODUCTS/PRODUCTS-139954
1.csv dst=null perm=null 2014-07-20 09:06:44,748 INFO namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1095)) -
Stopping services started for active state 2014-07-20 09:06:44,750
INFO namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1153)) -
Ending log segment 842249 2014-07-20 09:06:44,752 INFO
namenode.FSEditLog (FSEditLog.java:printStatistics(673)) - Number of
transactions: 2 Total time for transactions(ms): 0 Number of
transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 4
35 2014-07-20 09:06:44,774 INFO namenode.FSEditLog
(FSEditLog.java:printStatistics(673)) - Number of transactions: 2
Total time for transactions(ms): 0 Number of transactions batched in
Syncs: 0 Number of syncs: 2 SyncTimes(ms): 24 37 2014-07-20
09:06:44,805 INFO namenode.FSNamesystem (FSNamesystem.java:run(4362))
- NameNodeEditLogRoller was interrupted, exiting 2014-07-20 09:06:44,824 INFO namenode.FileJournalManager
(FileJournalManager.java:finalizeLogSegment(130)) - Finalizing edits
file
/ebs/hadoop/hdfs/namenode/current/edits_inprogress_0000000000000842249
-> /ebs/hadoop/hdfs/name node/current/edits_0000000000000842249-0000000000000842250 2014-07-20
09:06:44,874 INFO blockmanagement.CacheReplicationMonitor
(CacheReplicationMonitor.java:run(168)) - Shutting down
CacheReplicationMonitor 2014-07-20 09:06:44,876 INFO
namenode.FSNamesystem (FSNamesystem.java:startStandbyServices(1136)) -
Starting services required for standby state 2014-07-20 09:06:44,927
INFO ha.EditLogTailer (EditLogTailer.java:(117)) - Will roll
logs on active node at hadoop-client-us-west-1b/10.0.254.10:8020 every
120 seconds. 2014-07-20 09:06:44,929 INFO ha.StandbyCheckpointer
(StandbyCheckpointer.java:start(129)) - Starting standby checkpoint
thread... Checkpointing active NN at
http:// hadoop-client-us-west-1b:50070 Serving checkpoints at
http:// hadoop-client-us-west-1a:50070 2014-07-20 09:06:44,930 INFO
ipc.Server (Server.java:run(2027)) - IPC Server handler 3 on 8020,
call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from
10.0.1.50:57297 Call#8431877 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not
supported in state standby 2014-07-20 09:06:44,930 INFO ipc.Server
(Server.java:run(2027)) - IPC Server handler 16 on 8020, call
org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from
10.0.1.50:57294 Call#130105071 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not
supported in state standby 2014-07-20 09:06:44,940 INFO ipc.Server
(Server.java:run(2027)) - IPC Server handler 14 on 8020, call
org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from
10.0.1.50:57294 Call#130105072 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not
supported in state standby
Edit: 13th August 2014
We were able to find the root cause of the namenode switching: the namenode was receiving a large number of file-info requests, and then the switch happened.
But we still could not resolve the 'Operation category READ is not supported in state standby' error.
Edit: 7th December 2014
We found that, as the solution, the application needs to manually connect to the currently active namenode once the previously active namenode fails. Traffic for namenodes in HA mode is not automatically directed to the active node.
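For reference, the HDFS client can also be pointed at the HA nameservice instead of a specific namenode, in which case the client library handles the failover itself. A minimal hdfs-site.xml sketch (the nameservice name 'mycluster' and the namenode IDs nn1/nn2 are illustrative; the hosts follow the host1/host2 naming used above):
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>host1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>host2:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
Clients then address the filesystem as hdfs://mycluster/... and the failover proxy provider finds the active namenode.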
I had the same issue. You need to update the client libraries. Use Ambari to set up Spark and have it install the client on the server. Then set your SPARK_HOME environment variable.

Hbase managed zookeeper suddenly trying to connect to localhost instead of zookeeper quorum

I was running some tests with table mappers and reducers on large-scale problems. After a certain point my reducers started failing when the job was 80% done. From what I can tell from the syslogs, the problem is that one of my ZooKeeper connections is attempting to connect to localhost as opposed to the other ZooKeepers in the quorum.
Oddly, it seems to do just fine connecting to the other nodes while mapping is going on; it's the reduce phase that has the problem. Here are selected portions of the syslog which might be relevant to figuring out what's going on:
2014-06-27 09:44:01,599 INFO [main] org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hdev02:5181,hdev01:5181,hdev03:5181 sessionTimeout=10000 watcher=hconnection-0x4aee260b, quorum=hdev02:5181,hdev01:5181,hdev03:5181, baseZNode=/hbase
2014-06-27 09:44:01,612 INFO [main] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4aee260b connecting to ZooKeeper ensemble=hdev02:5181,hdev01:5181,hdev03:5181
2014-06-27 09:44:01,614 INFO [main-SendThread(hdev02:5181)] org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdev02/172.17.43.36:5181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
2014-06-27 09:44:01,615 INFO [main-SendThread(hdev02:5181)] org.apache.zookeeper.ClientCnxn: Socket connection established to hdev02/172.17.43.36:5181, initiating session
2014-06-27 09:44:01,617 INFO [main-SendThread(hdev02:5181)] org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2014-06-27 09:44:01,723 WARN [main] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=hdev02:5181,hdev01:5181,hdev03:5181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
2014-06-27 09:44:01,723 INFO [main] org.apache.hadoop.hbase.util.RetryCounter: Sleeping
***
org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 1 on-disk map-outputs
2014-06-27 09:55:12,012 INFO [main] org.apache.hadoop.mapred.Merger: Merging 1 sorted segments
2014-06-27 09:55:12,013 INFO [main] org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 33206049 bytes
2014-06-27 09:55:12,208 INFO [main] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: Merged 1 segments, 33206079 bytes to disk to satisfy reduce memory limit
2014-06-27 09:55:12,209 INFO [main] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: Merging 2 files, 265119413 bytes from disk
2014-06-27 09:55:12,209 INFO [main] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
2014-06-27 09:55:12,210 INFO [main] org.apache.hadoop.mapred.Merger: Merging 2 sorted segments
2014-06-27 09:55:12,212 INFO [main] org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 265119345 bytes
2014-06-27 09:55:12,279 INFO [main] org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x65afdbbb, quorum=localhost:2181, baseZNode=/hbase
2014-06-27 09:55:12,281 INFO [main] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x65afdbbb connecting to ZooKeeper ensemble=localhost:2181
2014-06-27 09:55:12,282 INFO [main-SendThread(localhost.localdomain:2181)] org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost.localdomain/127.0.0.1:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
2014-06-27 09:55:12,283 WARN [main-SendThread(localhost.localdomain:2181)] org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
2014-06-27 09:55:12,384 WARN [main] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
2014-06-27 09:55:12,384 INFO [main] org.apache.hadoop.hbase.util.RetryCounter: Sleeping 1000ms before retry #0...
2014-06-27 09:55:13,385 INFO [main-SendThread(localhost.localdomain:2181)] org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost.localdomain/127.0.0.1:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
2014-06-27 09:55:13,385 WARN [main-SendThread(localhost.localdomain:2181)] org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing
***
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
2014-06-27 09:55:13,486 ERROR [main] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 1 attempts
2014-06-27 09:55:13,486 WARN [main] org.apache.hadoop.hbase.zookeeper.ZKUtil: hconnection-0x65afdbbb, quorum=localhost:2181, baseZNode=/hbase Unable to set watcher on znode (/hbase/hbaseid)
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
I'm pretty sure it's configured correctly; here is the relevant portion of my hbase-site.xml.
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>5181</value>
<description>Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
</description>
</property>
<property>
<name>zookeeper.session.timeout</name>
<value>10000</value>
<description></description>
</property>
<property>
<name>hbase.client.retries.number</name>
<value>10</value>
<description></description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hdev01,hdev02,hdev03</value>
<description></description>
</property>
So far as I can tell hdev03 is the only server that has any problem with this. Netstating all relevant ports doesn't show me anything strange.
I've had the same problem when running HBase through Spark on YARN. Everything was fine until it suddenly started trying to connect to localhost instead of the quorum. Setting the port and quorum programmatically before the HBase call fixed the issue:
conf.set("hbase.zookeeper.quorum","my.server")
conf.set("hbase.zookeeper.property.clientPort","5181")
I'm using MapR, which has an "unusual" ZooKeeper port (5181).
Hard to say what is happening with the information given. I have found the Hadoop stack (HBase especially) to be quite hostile to even the slightest bit of misconfiguration in DNS or the hosts file.
As the quorum in your hbase-site.xml looks good, I'd start by checking network/hostname-resolution-related configuration:
Has the node name slipped into the localhost entry in /etc/hosts on hdev03?
Is there an entry for the host itself in hdev03's /etc/hosts (there should be)?
Has Reverse DNS been correctly configured in case you are using DNS for name resolution instead of the hosts file?
These are just a few pointers in the direction I'd look with this kind of issue. Hope it helps!
Add '--driver-class-path ~/hbase-1.1.2/conf' into spark-submit command, so that the task can find the configured zookeeper servers instead of 127.0.0.1.
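For example (the class and jar names are placeholders for your own job):
spark-submit --driver-class-path ~/hbase-1.1.2/conf --class com.example.MyHBaseJob my-hbase-job.jar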

Request Apache Accumulo error help using Hadoop and Zookeeper in a test environment

I am setting up a test environment using Apache Accumulo 1.4.0, Hadoop 0.20.2 and ZooKeeper 3.3.3. Please see below for the issue.
Hadoop and ZooKeeper work great together, but when I start Accumulo using the procedures from Apache Incubator I get the following stream of ZooKeeper INFO messages and a WARN:
2011-12-08 20:13:56,601 - INFO [main:QuorumPeerConfig#90] - Reading configuration from: /home/hadoop/zookeeper-3.3.3/bin/../conf/zoo.cfg
2011-12-08 20:13:56,603 - WARN [main:QuorumPeerMain#105] - Either no config or no quorum defined in config, running in standalone mode
2011-12-08 20:13:56,616 - INFO [main:QuorumPeerConfig#90] - Reading configuration from: /home/hadoop/zookeeper-3.3.3/bin/../conf/zoo.cfg
2011-12-08 20:13:56,617 - INFO [main:ZooKeeperServerMain#94] - Starting server
2011-12-08 20:13:56,626 - INFO [main:Environment#97] - Server environment:zookeeper.version=3.3.3-1073969, built on 02/23/2011 22:27 GMT
2011-12-08 20:13:56,627 - INFO [main:Environment#97] - Server environment:host.name.paz
2011-12-08 20:13:56,627 - INFO [main:Environment#97] - Server environment:java.version=1.6.0_26
2011-12-08 20:13:56,628 - INFO [main:Environment#97] - Server environment:java.vendor=Sun Microsystems Inc.
2011-12-08 20:13:56,629 - INFO [main:Environment#97] - Server environment:java.home=/usr/lib/jvm/java-6-sun-1.6.0.26/jre
2011-12-08 20:13:56,629 - INFO [main:Environment#97] - Server environment:java.class.path=/home/hadoop/zookeeper-3.3.3/bin/../build/classes:/home/hadoop/zookeeper-3.3.
3/bin/../build/lib/*.jar:/home/hadoop/zookeeper-3.3.3/bin/../zookeeper-3.3.3.jar:/home/hadoop/zookeeper-3.3.3/bin/../lib/log4j-1.2.15.jar:/home/hadoop/zookeeper-3.3.3/b
in/../lib/jline-0.9.94.jar:/home/hadoop/zookeeper-3.3.3/bin/../src/java/lib/*.jar:/home/hadoop/zookeeper-3.3.3/bin/../conf:
2011-12-08 20:13:56,630 - INFO [main:Environment#97] - Server environment:java.library.path=/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/i386/server:/usr/lib/jvm/java-6-su
n-1.6.0.26/jre/lib/i386:/usr/lib/jvm/java-6-sun-1.6.0.26/jre/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib
2011-12-08 20:13:56,630 - INFO [main:Environment#97] - Server environment:java.io.tmpdir=/tmp
2011-12-08 20:13:56,631 - INFO [main:Environment#97] - Server environment:java.compiler=<NA>
2011-12-08 20:13:56,631 - INFO [main:Environment#97] - Server environment:os.name=Linux
2011-12-08 20:13:56,632 - INFO [main:Environment#97] - Server environment:os.arch=i386
2011-12-08 20:13:56,633 - INFO [main:Environment#97] - Server environment:os.version=3.0.0-13-generic
2011-12-08 20:13:56,633 - INFO [main:Environment#97] - Server environment:user.name=hadoop
2011-12-08 20:13:56,634 - INFO [main:Environment#97] - Server environment:user.home=/home/hadoop
2011-12-08 20:13:56,634 - INFO [main:Environment#97] - Server environment:user.dir=/home/hadoop
2011-12-08 20:13:56,641 - INFO [main:ZooKeeperServer#663] - tickTime set to 2000
2011-12-08 20:13:56,641 - INFO [main:ZooKeeperServer#672] - minSessionTimeout set to -1
2011-12-08 20:13:56,642 - INFO [main:ZooKeeperServer#681] - maxSessionTimeout set to -1
2011-12-08 20:13:56,661 - INFO [main:NIOServerCnxn$Factory#143] - binding to port 0.0.0.0/0.0.0.0:2181
2011-12-08 20:13:56,691 - INFO [main:FileSnap#82] - Reading snapshot /home/hadoop/zoo/dataDir/version-2/snapshot.0
2011-12-08 20:13:56,708 - INFO [main:FileTxnSnapLog#208] - Snapshotting: 4e
2011-12-08 20:14:52,147 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory#251] - Accepted socket connection from /0:0:0:0:0:0:0:1:40694
2011-12-08 20:14:52,153 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#777] - Client attempting to establish new session at /0:0:0:0:0:0:0:1:40694
2011-12-08 20:14:52,154 - INFO [SyncThread:0:FileTxnLog#197] - Creating new log file: log.4f
2011-12-08 20:14:52,410 - INFO [SyncThread:0:NIOServerCnxn#1580] - Established session 0x13420623ee70000 with negotiated timeout 30000 for client /0:0:0:0:0:0:0:1:4069
4
2011-12-08 20:14:52,959 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory#251] - Accepted socket connection from /127.0.0.1:38446
2011-12-08 20:14:52,962 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#777] - Client attempting to establish new session at /127.0.0.1:38446
2011-12-08 20:14:53,007 - INFO [SyncThread:0:NIOServerCnxn#1580] - Established session 0x13420623ee70001 with negotiated timeout 30000 for client /127.0.0.1:38446
2011-12-08 20:14:59,932 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#634] - EndOfStreamException: Unable to read additional data from client session
id 0x13420623ee70000, likely client has closed socket
and when I start the accumulo shell I get the following errors:
18 12:44:38,746 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:38,846 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:38,947 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:39,048 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:39,148 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:39,249 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:39,350 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:39,450 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
Corrected the tserver memory settings so they do not exceed what the JVM allows. The tserver no longer crashes and the error is resolved.
The answer came from the accumulo-incubator user list and is reposted at the bottom.
Basically, while modifying the memory settings to run in pseudo-distributed mode on a laptop, I made incorrect changes to the accumulo-site.xml and accumulo-env.sh files concerning tablet server memory usage. The clue to the error was found in the /home/hadoop/accumulo/logs/tserver*.log file:
20 18:20:00,951 [tabletserver.NativeMap] ERROR: Failed to load native
map library
/home/hadoop/accumulo/lib/native/map/libNativeMap-Linux-i386-32.so
java.lang.UnsatisfiedLinkError: Can't load library:
/home/hadoop/accumulo/lib/native/map/libNativeMap-Linux-i386-32.so
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1706)
at java.lang.Runtime.load0(Runtime.java:770)
at java.lang.System.load(System.java:1003)
at
org.apache.accumulo.server.tabletserver.NativeMap.loadNativeLib(NativeMap.java:144)
at
org.apache.accumulo.server.tabletserver.NativeMap.<clinit>(NativeMap.java:156)
at
org.apache.accumulo.server.tabletserver.TabletServerResourceManager.<init>(TabletServerResourceManager.java:123)
at
org.apache.accumulo.server.tabletserver.TabletServer.config(TabletServer.java:2959)
at
org.apache.accumulo.server.tabletserver.TabletServer.main(TabletServer.java:3085)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.accumulo.start.Main$1.run(Main.java:89)
at java.lang.Thread.run(Thread.java:662)
20 18:20:00,999 [tabletserver.TabletServer] ERROR: Uncaught exception in
TabletServer.main, exiting
java.lang.IllegalArgumentException: Maximum tablet server map memory
134,217,728 and block cache sizes 186,646,528 is too large for this JVM
configuration 132,579,328
at
org.apache.accumulo.server.tabletserver.TabletServerResourceManager.<init>(TabletServerResourceManager.java:134)
at
org.apache.accumulo.server.tabletserver.TabletServer.config(TabletServer.java:2959)
at
org.apache.accumulo.server.tabletserver.TabletServer.main(TabletServer.java:3085)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.accumulo.start.Main$1.run(Main.java:89)
at java.lang.Thread.run(Thread.java:662)
Specific help text:
"The native libraries are not loading, which is shifting in-memory
map into the java workspace. Add to that your block cache size, and your
specifications for memory use are higher than the JVM will be allowed to
allocate. The tablet servers complain, and exit. You should see these
complaints on the accumulo monitor web pages.
You may find a benefit to rebuilding the native map library, which will
move the allocation of the in-memory map to outside the JVM. This is
not required.
The size of any memory dedicated to cache must be smaller than the size
of the JVM, which must include substantial working space for RPC calls
and garbage collection over time."
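As an illustration only (these values are made up, not the ones from this setup), the pieces that have to fit together are the tserver map and cache sizes in accumulo-site.xml and the tserver heap set in accumulo-env.sh:
accumulo-site.xml:
<property>
  <name>tserver.memory.maps.max</name>
  <value>256M</value>
</property>
<property>
  <name>tserver.cache.data.size</name>
  <value>64M</value>
</property>
<property>
  <name>tserver.cache.index.size</name>
  <value>64M</value>
</property>
accumulo-env.sh:
test -z "$ACCUMULO_TSERVER_OPTS" && export ACCUMULO_TSERVER_OPTS="-Xmx768m -Xms768m"
When the native map library fails to load, the in-memory map lives inside the JVM, so 256M + 64M + 64M has to leave comfortable headroom within the 768m heap; the IllegalArgumentException above shows what happens when it does not.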
If you run the ZooKeeper client (/home/hadoop/zookeeper-3.3.3/bin/zkCli.sh) and do an ls of
/accumulo/<instance uuid>/tservers
then I assume you won't see any servers listed. You should see one or more tablet servers listed if Accumulo was initialized properly. Are you sure you ran the Accumulo init script after setting your ZooKeeper servers in accumulo-site.xml per the instructions?
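A quick check could look like this (the instance path is a placeholder; use the actual instance ID that appears under /accumulo):
$ /home/hadoop/zookeeper-3.3.3/bin/zkCli.sh -server localhost:2181
[zk: localhost:2181(CONNECTED) 0] ls /accumulo/<instance uuid>/tservers
[]
An empty list here means no tablet servers have registered with ZooKeeper.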
Make sure you set "instance.zookeeper.host" to the location of your zookeeper node in the accumulo-site.xml file.
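That entry would look something like this (host and port should be wherever your ZooKeeper actually runs):
<property>
  <name>instance.zookeeper.host</name>
  <value>localhost:2181</value>
</property>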
Additionally, check the logs for your tservers and loggers. If you have other configuration issues, they will not go live, which will cause the master to report having difficulties finding tservers.
Go into $ACCUMULO_HOME/conf. Edit the files masters, slaves and tracers so that each contains a single line that reads "localhost" (I assume you are running a single node).

Resources