Enabling Hadoop Namenode HA in Apache ambari using Bigtop - hadoop

I Cloned the Bigtop code from Github, built RPMS, and installed the Hadoop cluster using Ambari. The Hadoop cluster had 3 nodes. When the name node HA is enabled from Ambari, the services do not start in step 7( Start Components ), so forward steps cannot be performed. Ambari agent throws some errors when it comes to step 7.
> Ambari-2.7.5
> Hadoop-3.3.4
> Zookeeper-3.5.9
> Linux server: Centos-7
In Ambari agent i am getting the following the error in the log.
> INFO 2022-12-05 13:58:37,440 __init__.py:82 - Event from server at /user/ (correlation_id=227): {u'status': u'OK'}
> ERROR 2022-12-05 13:58:37,704 alert_ha_namenode_health.py:185 - [Alert] NameNode High Availability Health on instance-5 fails:
> Traceback (most recent call last):
> File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/alerts/alert_ha_namenode_health.py", line 175, in execute
> state_response = get_jmx(jmx_uri, connection_timeout)
> File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/alerts/alert_ha_namenode_health.py", line 206, in get_jmx
> response = urllib2.urlopen(query, timeout=connection_timeout)
> File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
> return opener.open(url, data, timeout)
> File "/usr/lib64/python2.7/urllib2.py", line 431, in open
> response = self._open(req, data)
> File "/usr/lib64/python2.7/urllib2.py", line 449, in _open
> '_open', req)
> File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
> result = func(*args)
> File "/usr/lib64/python2.7/urllib2.py", line 1244, in http_open
> return self.do_open(httplib.HTTPConnection, req)
> File "/usr/lib64/python2.7/urllib2.py", line 1214, in do_open
> raise URLError(err)
> URLError: <urlopen error [Errno 111] Connection refused>
Ambari UI While trying to Enable HA
Namenode Log
> 2022-12-08 07:12:52,477 INFO namenode.FSEditLog
> (FSEditLog.java:printStatistics(794)) - Number of transactions: 168
> Total time for transactions(ms): 19 Number of transactions batched in
> Syncs: 4638 Number of syncs: 143 SyncTimes(ms): 268 2022-12-08
> 07:12:52,481 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/38b9c9dc549394c67f819f03f02ef67c/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,481 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/54a35f5c02b78671aa3cb1a555d4f13d/recovered.edits/22.seqid
> is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,486 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/TEST_MERGE_SPLIT/65daedd407be5de5db0a6df509451aac/recovered.edits/25.seqid is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,489 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.STATS/2b8248a14c2e38b71595200ae6cb6e9c/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,489 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/90de5ae3b5daaf334553cb73d3cffe96/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,495 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/8d944f63f2eec5d95fbf16a45eda13d1/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,496 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/hbase/meta/1588230740/recovered.edits/1300.seqid
> is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,526 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/611f59084c88307ee011a520a00cf1eb/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,529 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/5dbe5b39d39be33e7c549caab73352ba/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,530 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/ba0d8e224fd563575687aab2afa1de39/recovered.edits/25.seqid
> is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,534 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/2760ea047005151d035b988ef5a526f5/recovered.edits/22.seqid
> is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,536 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/TEST_MERGE_SPLIT/f8129cf7b0102dabf7ec9683513c1feb/recovered.edits/25.seqid is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,566 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.TASK/8667cc4454dd118504027c07ed8c150f/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,567 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/a023890792b51c1551471baaf3f385f5/recovered.edits/25.seqid
> is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,567 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/67a1e7ae67d76782a2adbaae3b2687a9/recovered.edits/25.seqid
> is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,582 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/TEST_MERGE_SPLIT/b59bdfcad8908003169571d01f135278/recovered.edits/22.seqid is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,583 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/4c62db7bf1c479ede323e53d21d9a41f/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,583 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/3ea53d95ec4227d5f5fc24c5e79a801f/recovered.edits/22.seqid
> is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,583 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/4604f896ccf933960a21a4f81edfa23f/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,584 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/f0030813232204d5165972d0b99e4a82/recovered.edits/25.seqid
> is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,586 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/0533eb20acb1aa3233403c30dcff05a6/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,612 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/303cb7d7b8ce11cbc4df9bc83e122898/recovered.edits/22.seqid
> is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,615 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/7a766993e99daeb64e3d2219ca8c9427/recovered.edits/22.seqid
> is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,616 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/4a3abf63944203c5020a3f2ef5749c44/recovered.edits/22.seqid
> is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,629 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/6be9c041387ca418f6c35f91135a3b7e/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,629 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/210c8496760d4754384b27bf3b45c072/recovered.edits/25.seqid
> is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,655 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/TEST_MERGE_SPLIT/df5d0ccbe8941453697c89984985ecd1/recovered.edits/22.seqid is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,655 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.CATALOG/ee1348d962c03994578d6bc8f0389831/recovered.edits/43.seqid
> is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,655 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/afa9666d2296f1d7c8629bbc7fc84073/recovered.edits/22.seqid
> is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,662 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/52b00f10bbcede600fe5f434b34629e9/recovered.edits/22.seqid
> is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,665 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/TEST_MERGE_SPLIT/a3fa94f77440b13d37602dace771aa7d/recovered.edits/29.seqid is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,678 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/TEST_MERGE_SPLIT/6c02cf70d5286b712c89a5f136167476/recovered.edits/26.seqid is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,679 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/526cacd3866c13e1c229baa123108783/recovered.edits/22.seqid
> is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,683 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.SEQUENCE/9db3f45bdef2439677f1d7cfed837e24/recovered.edits/25.seqid
> is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,705 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/TEST_MERGE_SPLIT/498c428ad17956df4843f78642957840/recovered.edits/23.seqid is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,706 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/hbase/namespace/bb7e9d3268a1cdf39f49426f0014cf37/recovered.edits/33.seqid
> is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,708 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.MUTEX/c7ec76e62616a4da4f8b81230921dd78/recovered.edits/22.seqid
> is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,709 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/TEST_MERGE_SPLIT/6f89e061764c1ec1a64b608e6e18c62c/recovered.edits/22.seqid is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,710 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.FUNCTION/85740d4c80415117ddf7c8321cabd250/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,714 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/TEST_MERGE_SPLIT/a26508adf2909c8bca5c4f02985bb096/recovered.edits/25.seqid is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,725 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/049bc3d9d7d676e72af6e31163db4364/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,725 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/c96562dd2509624dfe49f1586652e083/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,726 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/65b3fbf36173133b8b368e7fa7b37306/recovered.edits/22.seqid
> is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,727 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/58986e109c517c6893f00bd9c749144a/recovered.edits/22.seqid
> is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,730 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/11de2f3f2cc497467a17e3ba7a32c50a/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,733 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/a39782166928fc109f1c752a47b59ed1/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,750 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/17caf4e0a9ed1bb532569a413f59abe1/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,753 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/TEST_MERGE_SPLIT/1841e792cdbfa7f1f4536da504c68560/recovered.edits/22.seqid is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,753 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/3783addc127d101a5843032326989852/recovered.edits/19.seqid
> is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:12:52,772 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.LOG/f092c53f69188d676108ab1dc970629b/recovered.edits/22.seqid
> is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,774 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/TEST_MERGE_SPLIT/a53c806891e4e8573000bb3deb31895e/recovered.edits/25.seqid is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,776 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/data/default/SYSTEM.CHILD_LINK/78e4a7a00c64a00f1839d32d1a6d61d6/recovered.edits/25.seqid
> is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,874 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/WALs/ambari-agent-02.ambari,16020,1670481872455/ambari-agent-02.ambari%2C16020%2C1670481872455.1670481890182
> is closed by DFSClient_NONMAPREDUCE_1334295705_1 2022-12-08
> 07:12:52,877 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/WALs/ambari-agent-03.ambari,16020,1670481872397/ambari-agent-03.ambari%2C16020%2C1670481872397.meta.1670481887307.meta
> is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,898 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/WALs/ambari-agent-03.ambari,16020,1670481872397/ambari-agent-03.ambari%2C16020%2C1670481872397.1670481889996
> is closed by DFSClient_NONMAPREDUCE_-62228341_1 2022-12-08
> 07:12:52,913 INFO hdfs.StateChange
> (FSNamesystem.java:completeFile(2979)) - DIR* completeFile:
> /apps/hbase/data/WALs/ambari-agent-01.ambari,16020,1670481873184/ambari-agent-01.ambari%2C16020%2C1670481873184.1670481890944
> is closed by DFSClient_NONMAPREDUCE_1463686342_1 2022-12-08
> 07:15:43,548 INFO namenode.FSEditLog
> (FSEditLog.java:printStatistics(794)) - Number of transactions: 344
> Total time for transactions(ms): 32 Number of transactions batched in
> Syncs: 4690 Number of syncs: 268 SyncTimes(ms): 423 2022-12-08
> 07:15:43,548 INFO hdfs.StateChange
> (FSNamesystem.java:enterSafeMode(4721)) - STATE* Safe mode is ON. It
> was turned on manually. Use "hdfs dfsadmin -safemode leave" to turn
> safe mode off. 2022-12-08 07:16:03,457 INFO namenode.FSImage
> (FSImage.java:saveNamespace(1146)) - Save namespace ... 2022-12-08
> 07:16:03,458 INFO namenode.FSEditLog
> (FSEditLog.java:endCurrentLogSegment(1426)) - Ending log segment 4614,
> 4957 2022-12-08 07:16:03,459 INFO namenode.FSEditLog
> (FSEditLog.java:printStatistics(794)) - Number of transactions: 345
> Total time for transactions(ms): 32 Number of transactions batched in
> Syncs: 4690 Number of syncs: 269 SyncTimes(ms): 425 2022-12-08
> 07:16:03,462 INFO namenode.FileJournalManager
> (FileJournalManager.java:finalizeLogSegment(145)) - Finalizing edits
> file
> /hadoop/hdfs/namenode/current/edits_inprogress_0000000000000004614 ->
> /hadoop/hdfs/namenode/current/edits_0000000000000004614-0000000000000004958
> 2022-12-08 07:16:03,489 INFO namenode.FSImageFormatProtobuf
> (FSImageFormatProtobuf.java:save(511)) - Saving image file
> /hadoop/hdfs/namenode/current/fsimage.ckpt_0000000000000004958 using
> no compression 2022-12-08 07:16:03,566 INFO
> namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(515))
> - Image file /hadoop/hdfs/namenode/current/fsimage.ckpt_0000000000000004958 of size
> 43959 bytes saved in 0 seconds . 2022-12-08 07:16:03,572 INFO
> namenode.NNStorageRetentionManager
> (NNStorageRetentionManager.java:getImageTxIdToRetain(203)) - Going to
> retain 2 images with txid >= 1160 2022-12-08 07:16:03,572 INFO
> namenode.NNStorageRetentionManager
> (NNStorageRetentionManager.java:purgeImage(226)) - Purging old image
> FSImageFile(file=/hadoop/hdfs/namenode/current/fsimage_0000000000000000000,
> cpktTxId=0000000000000000000) 2022-12-08 07:16:03,582 INFO
> namenode.FSEditLog (FSEditLog.java:startLogSegment(1381)) - Starting
> log segment at 4959 2022-12-08 07:16:03,593 INFO
> namenode.FSNamesystem (FSNamesystem.java:saveNamespace(4547)) - New
> namespace image has been created 2022-12-08 07:16:53,274 ERROR
> namenode.NameNode (LogAdapter.java:error(75)) - RECEIVED SIGNAL 15:
> SIGTERM 2022-12-08 07:16:53,278 INFO namenode.FSImage
> (FSImage.java:lambda$run$0(1053)) - FSImageSaver clean checkpoint:
> txid=4958 when meet shutdown. 2022-12-08 07:16:53,278 INFO
> namenode.NameNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at ambari-server/172.20.0.2
> ************************************************************/
While executing sudo su hdfs -l -c 'hdfs namenode -initializeSharedEdits' command below log is printing
Command Log

Related

hbase import module don't succeed

I have to move some hbase tables from one hadoop cluster to another. I have extracted the tables using
bin/hbase org.apache.hadoop.hbase.mapreduce.Export \ <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
and I've put the return files into HDFS on my new cluster.
But when I try bin/hbase org.apache.hadoop.hbase.mapreduce.Import , I have the strange following logs:
hadoop#edgenode:~$ hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.Import ADCP /hbase/backup_hbase/ADCP/2022-07-04_1546/ADCP/
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase/lib/client-facing-thirdparty/slf4j-reload4j-1.7.33.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
2022-10-03 11:19:09,689 INFO [main] mapreduce.Import: writing directly to table from Mapper.
2022-10-03 11:19:09,847 INFO [main] client.RMProxy: Connecting to ResourceManager at /172.16.42.42:8032
2022-10-03 11:19:09,983 INFO [main] Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2022-10-03 11:19:10,043 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
2022-10-03 11:19:10,043 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:host.name=edgenode
2022-10-03 11:19:10,043 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:java.version=1.8.0_342
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:java.vendor=Private Build
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: hadoop-yarn-client-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-services-core-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-router-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-registry-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-3.3.3.jar
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/hadoop/lib/native
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:os.name=Linux
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:os.version=5.15.0-1018-kvm
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:user.name=hadoop
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:os.memory.free=174MB
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:os.memory.max=3860MB
2022-10-03 11:19:10,045 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:os.memory.total=237MB
2022-10-03 11:19:10,048 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Initiating client connection, connectString=namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$15/257950720#1124fc36
2022-10-03 11:19:10,054 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] common.X509Util: Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2022-10-03 11:19:10,061 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ClientCnxnSocket: jute.maxbuffer value is 4194304 Bytes
2022-10-03 11:19:10,069 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ClientCnxn: zookeeper.request.timeout value is 0. feature enabled=
2022-10-03 11:19:10,077 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7-SendThread(namenode:2181)] zookeeper.ClientCnxn: Opening socket connection to server namenode/172.16.42.42:2181. Will not attempt to authenticate using SASL (unknown error)
2022-10-03 11:19:10,084 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7-SendThread(namenode:2181)] zookeeper.ClientCnxn: Socket connection established, initiating session, client: /172.16.42.187:48598, server: namenode/172.16.42.42:2181
2022-10-03 11:19:10,120 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7-SendThread(namenode:2181)] zookeeper.ClientCnxn: Session establishment complete on server namenode/172.16.42.42:2181, sessionid = 0x1b000002cb790005, negotiated timeout = 40000
2022-10-03 11:19:11,001 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Session: 0x1b000002cb790005 closed
2022-10-03 11:19:11,001 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x1b000002cb790005
2022-10-03 11:19:15,366 INFO [main] input.FileInputFormat: Total input files to process : 32
2022-10-03 11:19:15,660 INFO [main] mapreduce.JobSubmitter: number of splits:32
2022-10-03 11:19:15,902 INFO [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1664271607293_0002
2022-10-03 11:19:16,225 INFO [main] conf.Configuration: resource-types.xml not found
2022-10-03 11:19:16,225 INFO [main] resource.ResourceUtils: Unable to find 'resource-types.xml'.
2022-10-03 11:19:16,231 INFO [main] resource.ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE
2022-10-03 11:19:16,231 INFO [main] resource.ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE
2022-10-03 11:19:16,293 INFO [main] impl.YarnClientImpl: Submitted application application_1664271607293_0002
2022-10-03 11:19:16,328 INFO [main] mapreduce.Job: The url to track the job: http://namenode:8088/proxy/application_1664271607293_0002/
2022-10-03 11:19:16,329 INFO [main] mapreduce.Job: Running job: job_1664271607293_0002
2022-10-03 11:19:31,513 INFO [main] mapreduce.Job: Job job_1664271607293_0002 running in uber mode : false
2022-10-03 11:19:31,514 INFO [main] mapreduce.Job: map 0% reduce 0%
2022-10-03 11:19:31,534 INFO [main] mapreduce.Job: 2-10-03 11:19:31.345]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
[2022-10-03 11:19:31.346]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
For more detailed output, check the application tracking page: http://namenode:8088/cluster/app/application_1664271607293_0002 Then click on links to logs of each attempt.
. Failing the application.
2022-10-03 11:19:31,552 INFO [main] mapreduce.Job: Counters: 0
I don't understand what the problem could be. I went to http://namenode:8088/cluster/app/application_1664271607293_0002 but i didn't found nothing interesting. I've tried the command with different tables but get the same result. The two clusters are not one the same version but I read that it wasn't a problem. Every service works well on my clusters and I can use hbase commands on the hbase shell without any problem. Also, map reduce programs works well on my new cluster. I've also tested the copyTable and snapchot methods for the data migration, which didn't work either.
Any idea of what should be the problem? Thanks! :)
update :
I found this on a datanode syslog in the hadoop web interface, may be useful?
2022-10-04 14:12:39,341 INFO [main] org.apache.hadoop.security.SecurityUtil: Updating Configuration
2022-10-04 14:12:39,354 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2022-10-04 14:12:39,493 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id { id: 7 cluster_timestamp: 1664271607293 } attemptId: 2 } keyId: -896624238)
2022-10-04 14:12:39,536 INFO [main] org.apache.hadoop.conf.Configuration: resource-types.xml not found
2022-10-04 14:12:39,536 INFO [main] org.apache.hadoop.yarn.util.resource.ResourceUtils: Unable to find 'resource-types.xml'.
2022-10-04 14:12:39,636 INFO [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:73)
at org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
at org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils.newJobId(MRBuilderUtils.java:39)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:298)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1745)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1742)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1673)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:70)
... 10 more
Caused by: java.lang.VerifyError: Bad type on operand stack
Exception Details:
Location:
org/apache/hadoop/mapreduce/v2/proto/MRProtos$JobIdProto$Builder.setAppId(Lorg/apache/hadoop/yarn/proto/YarnProtos$ApplicationIdProto;)Lorg/apache/hadoop/mapreduce/v2/proto/MRProtos$JobIdProto$Builder; #36: invokevirtual
Reason:
Type 'org/apache/hadoop/yarn/proto/YarnProtos$ApplicationIdProto' (current frame, stack[1]) is not assignable to 'com/google/protobuf/GeneratedMessage'
Current Frame:
bci: #36
flags: { }
locals: { 'org/apache/hadoop/mapreduce/v2/proto/MRProtos$JobIdProto$Builder', 'org/apache/hadoop/yarn/proto/YarnProtos$ApplicationIdProto' }
stack: { 'com/google/protobuf/SingleFieldBuilder', 'org/apache/hadoop/yarn/proto/YarnProtos$ApplicationIdProto' }
Bytecode:
0x0000000: 2ab4 0011 c700 1b2b c700 0bbb 002f 59b7
0x0000010: 0030 bf2a 2bb5 000a 2ab6 0031 a700 0c2a
0x0000020: b400 112b b600 3257 2a59 b400 1304 80b5
0x0000030: 0013 2ab0
Stackmap Table:
same_frame(#19)
same_frame(#31)
same_frame(#40)
at org.apache.hadoop.mapreduce.v2.proto.MRProtos$JobIdProto.newBuilder(MRProtos.java:1017)
at org.apache.hadoop.mapreduce.v2.api.records.impl.pb.JobIdPBImpl.<init>(JobIdPBImpl.java:37)
... 15 more
2022-10-04 14:12:39,641 ERROR [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:73)
at org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
at org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils.newJobId(MRBuilderUtils.java:39)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:298)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1745)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1742)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1673)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:70)
... 10 more
Caused by: java.lang.VerifyError: Bad type on operand stack
Exception Details:
Location:
org/apache/hadoop/mapreduce/v2/proto/MRProtos$JobIdProto$Builder.setAppId(Lorg/apache/hadoop/yarn/proto/YarnProtos$ApplicationIdProto;)Lorg/apache/hadoop/mapreduce/v2/proto/MRProtos$JobIdProto$Builder; #36: invokevirtual
Reason:
Type 'org/apache/hadoop/yarn/proto/YarnProtos$ApplicationIdProto' (current frame, stack[1]) is not assignable to 'com/google/protobuf/GeneratedMessage'
Current Frame:
bci: #36
flags: { }
locals: { 'org/apache/hadoop/mapreduce/v2/proto/MRProtos$JobIdProto$Builder', 'org/apache/hadoop/yarn/proto/YarnProtos$ApplicationIdProto' }
stack: { 'com/google/protobuf/SingleFieldBuilder', 'org/apache/hadoop/yarn/proto/YarnProtos$ApplicationIdProto' }
Bytecode:
0x0000000: 2ab4 0011 c700 1b2b c700 0bbb 002f 59b7
0x0000010: 0030 bf2a 2bb5 000a 2ab6 0031 a700 0c2a
0x0000020: b400 112b b600 3257 2a59 b400 1304 80b5
0x0000030: 0013 2ab0
Stackmap Table:
same_frame(#19)
same_frame(#31)
same_frame(#40)
at org.apache.hadoop.mapreduce.v2.proto.MRProtos$JobIdProto.newBuilder(MRProtos.java:1017)
at org.apache.hadoop.mapreduce.v2.api.records.impl.pb.JobIdPBImpl.<init>(JobIdPBImpl.java:37)
... 15 more
2022-10-04 14:12:39,643 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting with status 1: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.reflect.InvocationTargetException

Spring-boot application deploying issue

I created a spring-boot application using a tutorial and it was built successfully using 'mvn clean install' command. After that I execute the 'mvn spring-boot:run' command to run the application and it also successfully deployed. However, when I was loading the page on the browser by hitting http://localhost:8080/api, it always redirects to http://localhost:8080/login which I had deployed a few months ago. How should I delete deployment related to http://localhost:8080/login?
my controller class is as follows,
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import mat.pathini.model.Customer;
import mat.pathini.repo.CustomerRepository;
#CrossOrigin(origins = "http://localhost:4200")
#RestController
#RequestMapping("/api")
public class CustomerController {
#Autowired
CustomerRepository repository;
#GetMapping("/customers")
public List<Customer> getAllCustomers() {
System.out.println("Get all Customers...");
List<Customer> customers = new ArrayList<>();
repository.findAll().forEach(customers::add);
return customers;
}
#PostMapping("/customer")
public Customer postCustomer(#RequestBody Customer customer) {
Customer _customer = repository.save(new Customer(customer.getName(), customer.getAge()));
return _customer;
}
#DeleteMapping("/customer/{id}")
public ResponseEntity<String> deleteCustomer(#PathVariable("id") long id) {
System.out.println("Delete Customer with ID = " + id + "...");
repository.deleteById(id);
return new ResponseEntity<>("Customer has been deleted!", HttpStatus.OK);
}
#GetMapping("customers/age/{age}")
public List<Customer> findByAge(#PathVariable int age) {
List<Customer> customers = repository.findByAge(age);
return customers;
}
#PutMapping("/customer/{id}")
public ResponseEntity<Customer> updateCustomer(#PathVariable("id") long id, #RequestBody Customer customer) {
System.out.println("Update Customer with ID = " + id + "...");
Optional<Customer> customerData = repository.findById(id);
if (customerData.isPresent()) {
Customer _customer = customerData.get();
_customer.setName(customer.getName());
_customer.setAge(customer.getAge());
_customer.setActive(customer.isActive());
return new ResponseEntity<>(repository.save(_customer), HttpStatus.OK);
} else {
return new ResponseEntity<>(HttpStatus.NOT_FOUND);
}
}
}
The tutorial that I followed is,
https://grokonez.com/frontend/vue-js/spring-boot-vue-js-example-spring-data-jpa-rest-mysql-crud
Logs as follows
> Downloading from central: https://repo.maven.apache.org/maven2/commons-logging/commons-logging-api/1
> .1/commons-logging-api-1.1.jar Downloaded from central:
> https://repo.maven.apache.org/maven2/org/apache/maven/maven-plugin-registry
> /2.0.8/maven-plugin-registry-2.0.8.jar (29 kB at 7.8 kB/s) Downloading
> from central:
> https://repo.maven.apache.org/maven2/com/google/collections/google-collect
> ions/1.0/google-collections-1.0.jar Downloaded from central:
> https://repo.maven.apache.org/maven2/commons-logging/commons-logging-api/1.
> 1/commons-logging-api-1.1.jar (45 kB at 10 kB/s) Downloading from
> central:
> https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-archiver/2
> .8.1/plexus-archiver-2.8.1.jar Downloaded from central:
> https://repo.maven.apache.org/maven2/org/apache/xbean/xbean-reflect/3.4/xbe
> an-reflect-3.4.jar (134 kB at 29 kB/s) Downloading from central:
> https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-io/2.3.2/p
> lexus-io-2.3.2.jar Downloaded from central:
> https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-container-d
> efault/1.5.5/plexus-container-default-1.5.5.jar (217 kB at 47 kB/s)
> Downloading from central:
> https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-shade-
> plugin/2.2/maven-shade-plugin-2.2.jar Downloaded from central:
> https://repo.maven.apache.org/maven2/log4j/log4j/1.2.12/log4j-1.2.12.jar
> (3 58 kB at 76 kB/s) Downloading from central:
> https://repo.maven.apache.org/maven2/org/apache/maven/maven-compat/3.0/mav
> en-compat-3.0.jar Downloaded from central:
> https://repo.maven.apache.org/maven2/com/google/collections/google-collecti
> ons/1.0/google-collections-1.0.jar (640 kB at 130 kB/s) Downloading
> from central:
> https://repo.maven.apache.org/maven2/org/apache/maven/wagon/wagon-provider
> -api/1.0-beta-6/wagon-provider-api-1.0-beta-6.jar Downloaded from central:
> https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-archiver/2.
> 8.1/plexus-archiver-2.8.1.jar (143 kB at 29 kB/s) Downloading from central:
> https://repo.maven.apache.org/maven2/asm/asm/3.3.1/asm-3.3.1.jar
> Downloaded from central:
> https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-io/2.3.2/pl
> exus-io-2.3.2.jar (74 kB at 15 kB/s) Downloading from central:
> https://repo.maven.apache.org/maven2/asm/asm-commons/3.3.1/asm-commons-3.3
> .1.jar Downloaded from central:
> https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-shade-p
> lugin/2.2/maven-shade-plugin-2.2.jar (100 kB at 20 kB/s) Downloading
> from central:
> https://repo.maven.apache.org/maven2/asm/asm-tree/3.3.1/asm-tree-3.3.1.jar
>
> Downloaded from central:
> https://repo.maven.apache.org/maven2/org/apache/maven/maven-compat/3.0/mave
> n-compat-3.0.jar (285 kB at 54 kB/s) Downloading from central:
> https://repo.maven.apache.org/maven2/org/jdom/jdom/1.1/jdom-1.1.jar
> Downloaded from central:
> https://repo.maven.apache.org/maven2/org/apache/maven/wagon/wagon-provider-
> api/1.0-beta-6/wagon-provider-api-1.0-beta-6.jar (53 kB at 9.9 kB/s)
> Downloading from central:
> https://repo.maven.apache.org/maven2/org/apache/maven/shared/maven-depende
> ncy-tree/2.1/maven-dependency-tree-2.1.jar Downloaded from central:
> https://repo.maven.apache.org/maven2/asm/asm/3.3.1/asm-3.3.1.jar (44
> kB at
> 8.1 kB/s) Downloading from central: https://repo.maven.apache.org/maven2/org/vafer/jdependency/0.7/jdependency
> -0.7.jar Downloaded from central: https://repo.maven.apache.org/maven2/asm/asm-commons/3.3.1/asm-commons-3.3.
> 1.jar (38 kB at 7.0 kB/s) Downloaded from central: https://repo.maven.apache.org/maven2/asm/asm-tree/3.3.1/asm-tree-3.3.1.jar
> (22 kB at 3.9 kB/s) Downloading from central:
> https://repo.maven.apache.org/maven2/commons-io/commons-io/1.3.2/commons-i
> o-1.3.2.jar Downloading from central:
> https://repo.maven.apache.org/maven2/asm/asm-analysis/3.2/asm-analysis-3.2
> .jar Downloaded from central:
> https://repo.maven.apache.org/maven2/org/vafer/jdependency/0.7/jdependency-
> 0.7.jar (12 kB at 2.0 kB/s) Downloading from central: https://repo.maven.apache.org/maven2/asm/asm-util/3.2/asm-util-3.2.jar
> Downloaded from central:
> https://repo.maven.apache.org/maven2/org/jdom/jdom/1.1/jdom-1.1.jar
> (153 kB at 26 kB/s) Downloading from central:
> https://repo.maven.apache.org/maven2/com/google/guava/guava/11.0.2/guava-1
> 1.0.2.jar Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/shared/maven-dependen
> cy-tree/2.1/maven-dependency-tree-2.1.jar (60 kB at 10 kB/s)
> Downloaded from central:
> https://repo.maven.apache.org/maven2/asm/asm-analysis/3.2/asm-analysis-3.2.
> jar (18 kB at 3.0 kB/s) Downloaded from central:
> https://repo.maven.apache.org/maven2/commons-io/commons-io/1.3.2/commons-io
> -1.3.2.jar (88 kB at 15 kB/s) Downloaded from central: https://repo.maven.apache.org/maven2/asm/asm-util/3.2/asm-util-3.2.jar
> (37 kB at 5.8 kB/s) Downloaded from central:
> https://repo.maven.apache.org/maven2/com/google/guava/guava/11.0.2/guava-11
> .0.2.jar (1.6 MB at 222 kB/s) [INFO] [INFO] ---
> maven-install-plugin:2.5.2:install (default-install) #
> spring-boot-restapi-mysql --- Downloading from central:
> https://repo.maven.apache.org/maven2/junit/junit/3.8.1/junit-3.8.1.pom
> Downloaded from central:
> https://repo.maven.apache.org/maven2/junit/junit/3.8.1/junit-3.8.1.pom
> (998 B at 2.5 kB/s) Downloading from central:
> https://repo.maven.apache.org/maven2/commons-codec/commons-codec/1.6/commo
> ns-codec-1.6.pom Downloaded from central:
> https://repo.maven.apache.org/maven2/commons-codec/commons-codec/1.6/common
> s-codec-1.6.pom (11 kB at 25 kB/s) Downloading from central:
> https://repo.maven.apache.org/maven2/org/apache/commons/commons-parent/22/
> commons-parent-22.pom Downloaded from central:
> https://repo.maven.apache.org/maven2/org/apache/commons/commons-parent/22/c
> ommons-parent-22.pom (42 kB at 97 kB/s) Downloading from central:
> https://repo.maven.apache.org/maven2/org/apache/maven/shared/maven-shared-
> utils/0.4/maven-shared-utils-0.4.pom Downloaded from central:
> https://repo.maven.apache.org/maven2/org/apache/maven/shared/maven-shared-u
> tils/0.4/maven-shared-utils-0.4.pom (4.0 kB at 5.0 kB/s) Downloading
> from central:
> https://repo.maven.apache.org/maven2/junit/junit/3.8.1/junit-3.8.1.jar
> Downloading from central:
> https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-utils/3.0.
> 15/plexus-utils-3.0.15.jar Downloading from central:
> https://repo.maven.apache.org/maven2/org/apache/maven/shared/maven-shared-
> utils/0.4/maven-shared-utils-0.4.jar Downloading from central:
> https://repo.maven.apache.org/maven2/commons-codec/commons-codec/1.6/commo
> ns-codec-1.6.jar Downloading from central:
> https://repo.maven.apache.org/maven2/classworlds/classworlds/1.1-alpha-2/c
> lassworlds-1.1-alpha-2.jar Downloaded from central:
> https://repo.maven.apache.org/maven2/classworlds/classworlds/1.1-alpha-2/cl
> assworlds-1.1-alpha-2.jar (38 kB at 65 kB/s) Downloaded from central:
> https://repo.maven.apache.org/maven2/junit/junit/3.8.1/junit-3.8.1.jar
> (121 kB at 170 kB/s) Downloaded from central:
> https://repo.maven.apache.org/maven2/commons-codec/commons-codec/1.6/common
> s-codec-1.6.jar (233 kB at 271 kB/s) Downloaded from central:
> https://repo.maven.apache.org/maven2/org/apache/maven/shared/maven-shared-u
> tils/0.4/maven-shared-utils-0.4.jar (155 kB at 175 kB/s) Downloaded
> from central:
> https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-utils/3.0.1
> 5/plexus-utils-3.0.15.jar (239 kB at 253 kB/s) [INFO] Installing
> D:\MyWork\Project\Pathini\matrimonial-api\target\spring-boot-restapi-mysql-0.0.1-S
> NAPSHOT.jar to
> C:\Users\User\.m2\repository\com\grokonez\spring-boot-restapi-mysql\0.0.1-SNAPSHOT\sp
> ring-boot-restapi-mysql-0.0.1-SNAPSHOT.jar [INFO] Installing
> D:\MyWork\Project\Pathini\matrimonial-api\pom.xml to
> C:\Users\User\.m2\repository\
> com\grokonez\spring-boot-restapi-mysql\0.0.1-SNAPSHOT\spring-boot-restapi-mysql-0.0.1-SNAPSHOT.pom
> [INFO]
> ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO]
> ------------------------------------------------------------------------ [INFO] Total time: 03:54 min [INFO] Finished at:
> 2019-09-07T11:45:05+08:00 [INFO]
> ------------------------------------------------------------------------
>
> D:\MyWork\Project\Pathini\matrimonial-api>mvn spring-boot:run [INFO]
> Scanning for projects... [INFO] [INFO] ---------------<
> com.grokonez:spring-boot-restapi-mysql >--------------- [INFO]
> Building SpringBootRestMySQL 0.0.1-SNAPSHOT [INFO]
> --------------------------------[ jar ]--------------------------------- [INFO] [INFO] >>>
> spring-boot-maven-plugin:2.0.5.RELEASE:run (default-cli) >
> test-compile # spring-boot-res tapi-mysql >>> [INFO] [INFO] ---
> maven-resources-plugin:3.0.2:resources (default-resources) #
> spring-boot-restapi-mysql --
> - [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] Copying 1 resource [INFO] Copying 0 resource [INFO] [INFO] ---
> maven-compiler-plugin:3.7.0:compile (default-compile) #
> spring-boot-restapi-mysql --- [INFO] Nothing to compile - all classes
> are up to date [INFO] [INFO] ---
> maven-resources-plugin:3.0.2:testResources (default-testResources) #
> spring-boot-restapi- mysql --- [INFO] Using 'UTF-8' encoding to copy
> filtered resources. [INFO] Copying 0 resource [INFO] [INFO] ---
> maven-compiler-plugin:3.7.0:testCompile (default-testCompile) #
> spring-boot-restapi-mysql --- [INFO] Nothing to compile - all classes
> are up to date [INFO] [INFO] <<<
> spring-boot-maven-plugin:2.0.5.RELEASE:run (default-cli) <
> test-compile # spring-boot-res tapi-mysql <<< [INFO] [INFO] [INFO] ---
> spring-boot-maven-plugin:2.0.5.RELEASE:run (default-cli) #
> spring-boot-restapi-mysql ---
>
> . ____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __
> _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / /
> =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v2.0.5.RELEASE)
>
> 2019-09-07 11:55:59.573 INFO 7692 --- [ main]
> SpringBootRestMySqlApplication : Starting
> SpringBootRestMySqlApplication on HP-PC with PID 7692
> (D:\MyWork\Project\Pathini\matrimonia l-api\target\classes started by
> User in D:\MyWork\Project\Pathini\matrimonial-api) 2019-09-07
> 11:55:59.587 INFO 7692 --- [ main]
> SpringBootRestMySqlApplication : No active profile set,
> falling back to default profiles: default 2019-09-07 11:55:59.689
> INFO 7692 --- [ main]
> ConfigServletWebServerApplicationContext : Refreshing
> org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationC
> ontext#7c851b1f: startup date [Sat Sep 07 11:55:59 SGT 2019]; root of
> context hierarchy 2019-09-07 11:56:00.698 WARN 7692 --- [
> main] o.s.b.a.AutoConfigurationPackages :
> #EnableAutoConfiguration was declared on a class in the default
> package. Automatic #Repository and # Entity scanning is not enabled.
> 2019-09-07 11:56:01.254 INFO 7692 --- [ main]
> trationDelegate$BeanPostProcessorChecker : Bean
> 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration'
> of type [o
> rg.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringC
> GLIB$$81417af9] is not eligible for getting processed by all
> BeanPostProcessors (for example: not el igible for auto-proxying)
> 2019-09-07 11:56:02.102 INFO 7692 --- [ main]
> o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with
> port(s): 8080 (http) 2019-09-07 11:56:02.141 INFO 7692 --- [
> main] o.apache.catalina.core.StandardService : Starting service
> [Tomcat] 2019-09-07 11:56:02.141 INFO 7692 --- [ main]
> org.apache.catalina.core.StandardEngine : Starting Servlet Engine:
> Apache Tomcat/8.5.34 2019-09-07 11:56:02.153 INFO 7692 ---
> [ost-startStop-1] o.a.catalina.core.AprLifecycleListener : The APR
> based Apache Tomcat Native library which allows optimal performance in
> production environmen ts was not found on the java.library.path:
> [C:\Program Files\Java\jdk1.8.0_131\bin;C:\Windows\Sun\Ja
> va\bin;C:\Windows\system32;C:\Windows;C:\Program
> Files\Java\jdk1.8.0_131\bin;C:\Program Files\nodejs
> \;D:\MyWork\Project\spring-2.0.4.RELEASE\bin;%PATH%;C:\ProgramData\chocolatey\bin;C:\Program
> Files\A pache\maven\bin;C:\Program Files\Apache\maven\bin;C:\Program
> Files\Git\cmd;C:\Program Files\PuTTY\;C
> :\Users\User\AppData\Roaming\npm;C:\Program Files (x86)\Google\Cloud
> SDK\google-cloud-sdk\bin;D:\MyW
> ork\Project\spring-2.0.4.RELEASE\bin;C:\Program
> Files\Java\jdk1.8.0_131\bin;C:\Program Files\nodejs\
> ;D:\MyWork\Project\spring-2.0.4.RELEASE\bin;C:\Program
> Files\Java\jdk1.8.0_131\bin;C:\Program Files\
> nodejs\;D:\MyWork\Project\spring-2.0.4.RELEASE\bin;C:\Program
> Files\Java\jdk1.8.0_131\bin;C:\Program
> Files\nodejs\;D:\MyWork\Project\spring-2.0.4.RELEASE\bin;C:\Windows\System32;C:\ProgramData\chocola
> tey\bin;C:\Program Files\Apache\maven\bin;C:\Program
> Files\Apache\maven\bin;C:\Program Files\Git\cmd ;C:\Program
> Files\PuTTY\;C:\ProgramData\chocolatey\bin;C:\Program
> Files\Apache\maven\bin;C:\Program Files\Apache\maven\bin;C:\Program
> Files\Git\cmd;C:\Program Files\PuTTY\;C:\ProgramData\chocolatey\bi
> n;C:\Program Files\Apache\maven\bin;C:\Program
> Files\Apache\maven\bin;C:\Program Files\Git\cmd;C:\Pr ogram
> Files\PuTTY\;D:\MyWork\Project\spring-2.0.4.RELEASE\bin;C:\Program
> Files\Java\jdk1.8.0_131\bin ;C:\Program
> Files\nodejs\;D:\MyWork\Project\spring-2.0.4.RELEASE\bin;C:\Program
> Files\Java\jdk1.8.0_
> 131\bin;C:\Program;C:\Users\User\AppData\Local\Programs\Microsoft VS
> Code\bin;.] 2019-09-07 11:56:02.271 INFO 7692 --- [ost-startStop-1]
> o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring
> embedded WebApplicationContext 2019-09-07 11:56:02.272 INFO 7692 ---
> [ost-startStop-1] o.s.web.context.ContextLoader : Root
> WebApplicationContext: initialization completed in 2588 ms 2019-09-07
> 11:56:02.431 INFO 7692 --- [ost-startStop-1]
> o.s.b.w.servlet.FilterRegistrationBean : Mapping filter:
> 'characterEncodingFilter' to: [/*] 2019-09-07 11:56:02.432 INFO 7692
> --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*] 2019-09-07
> 11:56:02.432 INFO 7692 --- [ost-startStop-1]
> o.s.b.w.servlet.FilterRegistrationBean : Mapping filter:
> 'httpPutFormContentFilter' to: [/*] 2019-09-07 11:56:02.432 INFO 7692
> --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*] 2019-09-07
> 11:56:02.433 INFO 7692 --- [ost-startStop-1]
> .s.DelegatingFilterProxyRegistrationBean : Mapping filter:
> 'springSecurityFilterChain' to: [/*] 2019-09-07 11:56:02.434 INFO
> 7692 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean :
> Servlet dispatcherServlet mapped to [/] 2019-09-07 11:56:02.650 INFO
> 7692 --- [ main] com.zaxxer.hikari.HikariDataSource :
> HikariPool-1 - Starting... 2019-09-07 11:56:02.889 INFO 7692 --- [
> main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start
> completed. 2019-09-07 11:56:02.956 INFO 7692 --- [ main]
> j.LocalContainerEntityManagerFactoryBean : Building JPA container
> EntityManagerFactory for persistence unit 'default' 2019-09-07
> 11:56:02.980 INFO 7692 --- [ main]
> o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing
> PersistenceUnitInfo [
> name: default
> ...] 2019-09-07 11:56:03.099 INFO 7692 --- [ main] org.hibernate.Version : HHH000412: Hibernate Core
> {5.2.17.Final} 2019-09-07 11:56:03.101 INFO 7692 --- [
> main] org.hibernate.cfg.Environment : HHH000206:
> hibernate.properties not found 2019-09-07 11:56:03.161 INFO 7692 ---
> [ main] o.hibernate.annotations.common.Version :
> HCANN000001: Hibernate Commons Annotations {5.0.1.Final} 2019-09-07
> 11:56:03.313 INFO 7692 --- [ main]
> org.hibernate.dialect.Dialect : HHH000400: Using dialect:
> org.hibernate.dialect.MySQL5Dialect 2019-09-07 11:56:03.664 INFO 7692
> --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
> 2019-09-07 11:56:03.798 INFO 7692 --- [ main]
> o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path
> [/**/favicon.ico] onto handler of type [class
> org.springframework.web.servlet.resour ce.ResourceHttpRequestHandler]
> 2019-09-07 11:56:04.089 INFO 7692 --- [ main]
> s.w.s.m.m.a.RequestMappingHandlerAdapter : Looking for
> #ControllerAdvice:
> org.springframework.boot.web.servlet.context.AnnotationConfigServletW
> ebServerApplicationContext#7c851b1f: startup date [Sat Sep 07 11:55:59
> SGT 2019]; root of context hi erarchy 2019-09-07 11:56:04.155 WARN
> 7692 --- [ main] aWebConfiguration$JpaWebMvcConfiguration :
> spring.jpa.open-in-view is enabled by default. Therefore, database
> queries may be performed during v iew rendering. Explicitly configure
> spring.jpa.open-in-view to disable this warning 2019-09-07
> 11:56:04.219 INFO 7692 --- [ main]
> s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error]}" onto
> public
> org.springframework.http.ResponseEntity<java.util.Map<java.lang.Stri
> ng, java.lang.Object>>
> org.springframework.boot.autoconfigure.web.servlet.error.BasicErrorController
> .error(javax.servlet.http.HttpServletRequest) 2019-09-07 11:56:04.221
> INFO 7692 --- [ main]
> s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped
> "{[/error],produces=[text/html]}" onto public
> org.springframework.web.servlet.ModelAndView or
> g.springframework.boot.autoconfigure.web.servlet.error.BasicErrorController.errorHtml(javax.servlet.
> http.HttpServletRequest,javax.servlet.http.HttpServletResponse)
> 2019-09-07 11:56:04.590 INFO 7692 --- [ main]
> o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path
> [/webjars/**] onto handler of type [class
> org.springframework.web.servlet.resource.R esourceHttpRequestHandler]
> 2019-09-07 11:56:04.591 INFO 7692 --- [ main]
> o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**] onto
> handler of type [class
> org.springframework.web.servlet.resource.ResourceH ttpRequestHandler]
> 2019-09-07 11:56:05.063 INFO 7692 --- [ main]
> .s.s.UserDetailsServiceAutoConfiguration :
>
>
> Using generated security password:
> 994acd24-ee2a-4142-9514-90abd9626efc
>
> 2019-09-07 11:56:05.276 INFO 7692 --- [ main]
> o.s.s.web.DefaultSecurityFilterChain : Creating filter chain:
> org.springframework.security.web.util.matcher.AnyRequestMatcher#1,
> [org.sprin
> gframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter#6cae2294,
> org.springf
> ramework.security.web.context.SecurityContextPersistenceFilter#590e1be3,
> org.springframework.securit y.web.header.HeaderWriterFilter#63e37b1a,
> org.springframework.security.web.csrf.CsrfFilter#7b9b58ea,
> org.springframework.security.web.authentication.logout.LogoutFilter#901cc19,
> org.springframework.se
> curity.web.authentication.UsernamePasswordAuthenticationFilter#5bb94950,
> org.springframework.securit
> y.web.authentication.ui.DefaultLoginPageGeneratingFilter#37b2a2c6,
> org.springframework.security.web.
> authentication.www.BasicAuthenticationFilter#12bbd6aa,
> org.springframework.security.web.savedrequest
> .RequestCacheAwareFilter#2dc9fd87,
> org.springframework.security.web.servletapi.SecurityContextHolder
> AwareRequestFilter#53ad8be3,
> org.springframework.security.web.authentication.AnonymousAuthentication
> Filter#f357d9f,
> org.springframework.security.web.session.SessionManagementFilter#1440774b,
> org.sprin
> gframework.security.web.access.ExceptionTranslationFilter#1186a99c,
> org.springframework.security.web
> .access.intercept.FilterSecurityInterceptor#625c3443] 2019-09-07
> 11:56:05.405 INFO 7692 --- [ main]
> o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX
> exposure on startup 2019-09-07 11:56:05.408 INFO 7692 --- [
> main] o.s.j.e.a.AnnotationMBeanExporter : Bean with name
> 'dataSource' has been autodetected for JMX exposure 2019-09-07
> 11:56:05.416 INFO 7692 --- [ main]
> o.s.j.e.a.AnnotationMBeanExporter : Located MBean 'dataSource':
> registering with JMX server as MBean
> [com.zaxxer.hikari:name=dataSource, type=HikariDataSource] 2019-09-07
> 11:56:05.465 INFO 7692 --- [ main]
> o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s):
> 8080 (http) with context path '' 2019-09-07 11:56:05.471 INFO 7692
> --- [ main] SpringBootRestMySqlApplication : Started SpringBootRestMySqlApplication in 6.497 seconds (JVM running
> for 10.695) 2019-09-07 11:58:53.412 INFO 7692 --- [nio-8080-exec-1]
> o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring
> FrameworkServlet 'dispatcherServlet' 2019-09-07 11:58:53.413 INFO
> 7692 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet :
> FrameworkServlet 'dispatcherServlet': initialization started
> 2019-09-07 11:58:53.454 INFO 7692 --- [nio-8080-exec-1]
> o.s.web.servlet.DispatcherServlet : FrameworkServlet
> 'dispatcherServlet': initialization completed in 40 ms
You have to delete or comment your dependency "Spring Security" in pom.xml. Because it automatically adds to your Application default /login page for authentication(without configurations and users to login(only generated "user" + "password in your console")).
In your project you have added
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
Now, In order to make it work for production you may have to configure it properly but since you are following tutorial, which leaves you with 2 options, i.e.
either remove this dependency or use default password which is being printed on console, every time you run your application.
Default username:
user
Default password:
Using generated security password:
994acd24-ee2a-4142-9514-90abd9626efc
NOTE this password changes every time you re-run your application and always check console logs for new password

Hikaripool : No new connections are added to the pool when old connections are expired

I have a Hikari pool for AzureSQL database. The pool size is 75. Initialy it add 75 conncetions to the pool, when connections are closed, its adding new connectiosn to the pool.
Pool configuration is shown below.
But after some time we see the total number of connections in the pool goes down. When connections are cloased no new connections are added to the pool. Finally the total number of connections reach ZERO.
Please refer the logs below. The logs clearly shows this behaviour.
Once total connection is ZERO, threads wait for connection and finally times out with error Connection is not available, request timed out after 30000ms
Below is the connection pool configuration.
MyConnectionPool - configuration:
allowPoolSuspension.............false
autoCommit......................true
catalog.........................none
connectionInitSql...............none
connectionTestQuery.............none
connectionTimeout...............30000
dataSource......................none
dataSourceClassName.............none
dataSourceJNDI..................none
dataSourceProperties............{password=<masked>}
driverClassName................."com.microsoft.sqlserver.jdbc.SQLServerDriver"
healthCheckProperties...........{}
healthCheckRegistry.............none
idleTimeout.....................300000
initializationFailFast..........true
initializationFailTimeout.......1
isolateInternalQueries..........false
jdbc4ConnectionTest.............false
jdbcUrl.........................jdbc:sqlserver://<SERVER>:<PORT>;database=<database>
leakDetectionThreshold..........0
maxLifetime.....................600000
maximumPoolSize.................100
metricRegistry..................none
metricsTrackerFactory...........none
minimumIdle.....................75
password........................<masked>
poolName........................"MyConnectionPool"
readOnly........................false
registerMbeans..................false
scheduledExecutor...............none
scheduledExecutorService........internal
schema..........................none
threadFactory...................internal
transactionIsolation............default
username........................"dbusername"
validationTimeout...............5000
Logs
2019-05-15 12:55:28.572 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133785 ClientConnectionId: 5a2d0116-a8f9-4cd2-aab7-3138db1ea627: (connection was evicted)
2019-05-15 12:55:28.615 INFO 17 --- [onnection adder] ContainerTrustManagerFactory$PKIXFactory : Adding System Trust Manager
2019-05-15 12:55:28.661 DEBUG 17 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - Added connection ConnectionID:133873 ClientConnectionId: 1e729748-5f80-4a88-8545-e0ca16d1a34b
2019-05-15 12:55:28.661 DEBUG 17 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - After adding stats (total=76, active=1, idle=75, waiting=0)
2019-05-15 12:55:42.192 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133798 ClientConnectionId: d3816606-8cb5-4b5e-8ca0-771fa877e2a2: (connection has passed maxLifetime)
2019-05-15 12:55:45.478 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133797 ClientConnectionId: b464f02e-eed6-4f33-9b2a-c69e894c1611: (connection has passed maxLifetime)
2019-05-15 12:55:45.484 INFO 17 --- [onnection adder] ContainerTrustManagerFactory$PKIXFactory : Adding System Trust Manager
2019-05-15 12:55:45.530 DEBUG 17 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - Added connection ConnectionID:133874 ClientConnectionId: a02ef032-3a41-429c-b328-b0cd9b74ba15
2019-05-15 12:55:46.482 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133796 ClientConnectionId: cb17a32d-234a-40a3-b0fb-617cac5158a1: (connection has passed maxLifetime)
2019-05-15 12:55:46.486 INFO 17 --- [onnection adder] ContainerTrustManagerFactory$PKIXFactory : Adding System Trust Manager
2019-05-15 12:55:46.552 DEBUG 17 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - Added connection ConnectionID:133875 ClientConnectionId: e3731254-71ac-4fd8-b304-19c71ebad2b6
2019-05-15 12:55:47.131 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133801 ClientConnectionId: 1357d6e1-20ce-4a84-8143-2fb095fa7dbb: (connection has passed maxLifetime)
2019-05-15 12:55:47.136 INFO 17 --- [onnection adder] ContainerTrustManagerFactory$PKIXFactory : Adding System Trust Manager
2019-05-15 12:55:47.191 DEBUG 17 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - Added connection ConnectionID:133876 ClientConnectionId: 6ba7a1e7-52df-4719-a95c-fb5d67711aaa
2019-05-15 12:55:48.834 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133800 ClientConnectionId: 3f872d11-1af4-47de-9e8c-51f8801ed750: (connection has passed maxLifetime)
2019-05-15 12:55:48.838 INFO 17 --- [onnection adder] ContainerTrustManagerFactory$PKIXFactory : Adding System Trust Manager
2019-05-15 12:55:48.890 DEBUG 17 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - Added connection ConnectionID:133877 ClientConnectionId: ae989f7c-d3fa-4ff0-9305-ebbbf393f558
2019-05-15 12:55:50.271 DEBUG 17 --- [ool housekeeper] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - Before cleanup stats (total=75, active=0, idle=75, waiting=0)
2019-05-15 12:55:50.271 DEBUG 17 --- [ool housekeeper] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - After cleanup stats (total=75, active=0, idle=75, waiting=0)
2019-05-15 12:55:50.955 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133799 ClientConnectionId: c71a7b65-b306-4385-81f5-6f264feb2d7b: (connection has passed maxLifetime)
2019-05-15 12:55:50.959 INFO 17 --- [onnection adder] ContainerTrustManagerFactory$PKIXFactory : Adding System Trust Manager
2019-05-15 12:55:51.026 DEBUG 17 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - Added connection ConnectionID:133878 ClientConnectionId: 7db54029-a6a1-47e5-bae7-dc42c5a3cd5b
2019-05-15 12:55:53.232 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133803 ClientConnectionId: e67f2293-726c-4651-afdb-c5d86cbbf208: (connection has passed maxLifetime)
2019-05-15 12:55:53.237 INFO 17 --- [onnection adder] ContainerTrustManagerFactory$PKIXFactory : Adding System Trust Manager
2019-05-15 12:55:54.994 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133802 ClientConnectionId: 51fd57be-67a2-4a85-ad77-396e672a0bfb: (connection has passed maxLifetime)
2019-05-15 12:55:56.913 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133804 ClientConnectionId: b6c81baf-0bf6-4002-9075-f1311c5eeb47: (connection has passed maxLifetime)
2019-05-15 12:56:00.639 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133805 ClientConnectionId: 5b5abe04-23ac-468f-93be-1424b4f838ea: (connection has passed maxLifetime)
2019-05-15 12:56:07.080 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133806 ClientConnectionId: 052f89a1-c9ba-4e8d-81e8-af884b19a260: (connection has passed maxLifetime)
2019-05-15 12:56:07.515 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133807 ClientConnectionId: f5caa695-6c10-4edc-bf0d-9dc9a5bebbcd: (connection has passed maxLifetime)
2019-05-15 12:56:09.517 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133809 ClientConnectionId: 3eb98771-b145-4788-be3f-0fe1ed20bc2e: (connection has passed maxLifetime)
2019-05-15 12:56:11.496 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133808 ClientConnectionId: 3cb59458-f4ee-44f0-b89a-c041bab47f05: (connection has passed maxLifetime)
2019-05-15 12:56:19.231 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133811 ClientConnectionId: e9ef4658-e86d-4d9d-b277-fe184bf91e1b: (connection has passed maxLifetime)
2019-05-15 12:56:20.271 DEBUG 17 --- [ool housekeeper] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - Before cleanup stats (total=66, active=0, idle=66, waiting=0)
2019-05-15 12:56:20.272 DEBUG 17 --- [ool housekeeper] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - After cleanup stats (total=66, active=0, idle=66, waiting=0)
2019-05-15 12:56:21.899 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133810 ClientConnectionId: 225d1741-f9ee-4eee-9781-6fedde1f58d1: (connection has passed maxLifetime)
2019-05-15 12:56:26.133 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133816 ClientConnectionId: 6d2519eb-d5d2-40ee-b21e-861c65469ae7: (connection has passed maxLifetime)
2019-05-15 12:56:28.199 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133812 ClientConnectionId: 9c6ed374-e198-49e4-993d-f941eafd90c8: (connection has passed maxLifetime)
2019-05-15 12:56:33.284 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133814 ClientConnectionId: c228ddbc-a681-4840-b030-b21c1784ab44: (connection has passed maxLifetime)
2019-05-15 12:56:34.444 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133817 ClientConnectionId: 166f9e66-dea6-4fdc-875e-8fd6a3e8d63e: (connection has passed maxLifetime)
2019-05-15 12:56:35.498 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133813 ClientConnectionId: 19c35284-2e48-4dd9-b15e-a399693da5f4: (connection has passed maxLifetime)
2019-05-15 12:56:38.204 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133815 ClientConnectionId: 55ec82d6-1768-4dec-a9c6-5066e5a49010: (connection has passed maxLifetime)
2019-05-15 12:56:43.244 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133818 ClientConnectionId: f6bd0e08-82e7-4a3b-855b-b687b637cc54: (connection has passed maxLifetime)
2019-05-15 12:56:43.401 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133821 ClientConnectionId: 2d231ef2-3bda-48d2-b273-bb6baa6460eb: (connection has passed maxLifetime)
2019-05-15 12:56:45.232 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133819 ClientConnectionId: f3d57bb6-1216-495a-ab96-66037700a410: (connection has passed maxLifetime)
2019-05-15 12:56:49.309 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133822 ClientConnectionId: 67e255ee-3b2e-4b60-8f92-29f392545365: (connection has passed maxLifetime)
2019-05-15 12:56:50.272 DEBUG 17 --- [ool housekeeper] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - Before cleanup stats (total=55, active=0, idle=55, waiting=0)
2019-05-15 12:56:50.272 DEBUG 17 --- [ool housekeeper] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - After cleanup stats (total=55, active=0, idle=55, waiting=0)
2019-05-15 12:56:51.507 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133820 ClientConnectionId: a9f80ef0-4dd8-4e89-b825-5b00290f7139: (connection has passed maxLifetime)
2019-05-15 12:57:04.483 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133824 ClientConnectionId: 98796f7d-76e5-425c-a2eb-e43f7baf64df: (connection has passed maxLifetime)
2019-05-15 12:57:05.515 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133825 ClientConnectionId: 60f39474-91b3-4fa3-8659-5d20507be6eb: (connection has passed maxLifetime)
2019-05-15 12:57:06.123 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133823 ClientConnectionId: 890ea5e0-cb0e-4411-9d9f-829cb6d9cba6: (connection has passed maxLifetime)
2019-05-15 12:57:12.578 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133826 ClientConnectionId: 4742bb54-0efb-424b-a369-e3fb927ea2ab: (connection has passed maxLifetime)
2019-05-15 12:57:20.272 DEBUG 17 --- [ool housekeeper] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - Before cleanup stats (total=50, active=0, idle=50, waiting=0)
2019-05-15 12:57:20.273 DEBUG 17 --- [ool housekeeper] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - After cleanup stats (total=50, active=0, idle=50, waiting=0)
2019-05-15 12:57:28.559 DEBUG 17 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : MyConnectionPool - Closing connection ConnectionID:133827 ClientConnectionId: d7a702e4-0a2e-4768-b402-23655fa1c9af: (connection has passed maxLifetime)
2019-05-15 12:57:50.273 DEBUG 17 --- [ool housekeeper] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - Before cleanup stats (total=49, active=0, idle=49, waiting=0)
2019-05-15 12:57:50.273 DEBUG 17 --- [ool housekeeper] com.zaxxer.hikari.pool.HikariPool : MyConnectionPool - After cleanup stats (total=49, active=0, idle=49, waiting=0)
Wondering if anyone here has faced this issue before and can help me with the solution.

PIG creates file on Hadoop but cannot write to it

I am learning hadoop and created a simple pig script.
Reading a file works, but writing to another file does not.
My script runs fine, the DUMP f command shows me 10 records, as expected. But when I store the same relation to a file (store f into 'result.csv';), there are some odd messages on the console, and for some reason, in the end I have a result file with only the first 3 records.
My questions are:
What's the matter with the IOException, when reading worked and
writing worked at least partly?
Why does the console tell me Total records written : 0, when actually 3 records have been written?
Why didn't it store the 10 records, as expected?
My Script (it's just some sandbox playing)
cd /user/samples
c = load 'crimes.csv' using PigStorage(',')
as (ID:int,Case_Number:int,Date:chararray,Block:chararray,IUCR:chararray,Primary_Type,Description,LocationDescription,Arrest:boolean,Domestic,Beat,District,Ward,CommunityArea,FBICode,XCoordinate,YCoordinate,Year,UpdatedOn,Latitude,Longitude,Location);
c = LIMIT c 1000;
t = foreach c generate ID, Date, Arrest, Year;
f = FILTER t by Arrest==true;
f = LIMIT f 10;
dump f;
store f into 'result.csv';
part of the console output:
2016-07-21 15:55:07,435 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-07-21 15:55:07,537 [main] WARN org.apache.pig.tools.pigstats.mapreduce.MRJobStats - Unable to get job counters
java.io.IOException: java.io.IOException: java.net.ConnectException: Call From m1.hdp2/192.168.178.201 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.pig.backend.hadoop.executionengine.shims.HadoopShims.getCounters(HadoopShims.java:132)
at org.apache.pig.tools.pigstats.mapreduce.MRJobStats.addCounters(MRJobStats.java:284)
at org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil.addSuccessJobStats(MRPigStatsUtil.java:235)
at org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil.accumulateStats(MRPigStatsUtil.java:165)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:360)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:308)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1474)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1459)
at org.apache.pig.PigServer.execute(PigServer.java:1448)
at org.apache.pig.PigServer.access$500(PigServer.java:118)
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1773)
at org.apache.pig.PigServer.registerQuery(PigServer.java:707)
at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:1075)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:505)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:231)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:206)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:564)
at org.apache.pig.Main.main(Main.java:176)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: java.net.ConnectException: Call From m1.hdp2/192.168.178.201 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:343)
at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:428)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:572)
at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:184)
at org.apache.pig.backend.hadoop.executionengine.shims.HadoopShims.getCounters(HadoopShims.java:126)
... 24 more
Caused by: java.net.ConnectException: Call From m1.hdp2/192.168.178.201 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.GeneratedConstructorAccessor18.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
at org.apache.hadoop.ipc.Client.call(Client.java:1479)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy14.getJobReport(Unknown Source)
at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getJobReport(MRClientProtocolPBClientImpl.java:133)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:324)
... 28 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
at org.apache.hadoop.ipc.Client.call(Client.java:1451)
... 36 more
2016-07-21 15:55:07,540 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2016-07-21 15:55:07,571 [main] INFO org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.7.2 0.16.0 hadoop 2016-07-21 15:50:17 2016-07-21 15:55:07 FILTER,LIMIT
Success!
Job Stats (time in seconds):
JobId Maps Reduces MaxMapTime MinMapTime AvgMapTime MedianMapTime MaxReduceTime MinReduceTime AvgReduceTime MedianReducetime Alias Feature Outputs
job_1469130571595_0001 3 1 n/a n/a n/a n/a n/a n/a n/a n/a c
job_1469130571595_0002 1 1 n/a n/a n/a n/a n/a n/a n/a n/a c,f,t hdfs://localhost:9000/user/samples/result.csv,
Input(s):
Successfully read 0 records from: "hdfs://localhost:9000/user/samples/crimes.csv"
Output(s):
Successfully stored 0 records in: "hdfs://localhost:9000/user/samples/result.csv"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_1469130571595_0001 -> job_1469130571595_0002,
job_1469130571595_0002
2016-07-21 15:55:07,573 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2016-07-21 15:55:07,585 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2016-07-21 15:55:08,592 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

Hadoop 2.6.4 MR job quick freeze

Hadoop 2.6.4: 1 master + 2 slaves on AWS EC2
master: namenode, secondary namenode, resource manager
slave: datanode, node manager
When running a test MR job (wordcount), it freezes right away:
hduser#ip-172-31-4-108:~$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /data/shakespeare /data/out1
16/03/21 10:45:19 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-4-108/172.31.4.108:8032
16/03/21 10:45:21 INFO input.FileInputFormat: Total input paths to process : 5
16/03/21 10:45:21 INFO mapreduce.JobSubmitter: number of splits:5
16/03/21 10:45:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1458556970596_0001
16/03/21 10:45:22 INFO impl.YarnClientImpl: Submitted application application_1458556970596_0001
16/03/21 10:45:22 INFO mapreduce.Job: The url to track the job: http://ip-172-31-4-108:8088/proxy/application_1458556970596_0001/
16/03/21 10:45:22 INFO mapreduce.Job: Running job: job_1458556970596_0001
When running start-dfs.sh and start-yarn.sh on master, all daemons run succesfully (jps command) on corresponding EC2 instance.
Below Resource Manager log when launching MR job:
2016-03-21 10:45:20,152 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 1
2016-03-21 10:45:22,784 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 1 submitted by user hduser
2016-03-21 10:45:22,785 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1458556970596_0001
2016-03-21 10:45:22,787 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hduser IP=172.31.4.108 OPERATION=Submit Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1458556970596_0001
2016-03-21 10:45:22,788 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1458556970596_0001 State change from NEW to NEW_SAVING
2016-03-21 10:45:22,805 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1458556970596_0001
2016-03-21 10:45:22,807 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1458556970596_0001 State change from NEW_SAVING to SUBMITTED
2016-03-21 10:45:22,809 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1458556970596_0001 user: hduser leaf-queue of parent: root #applications: 1
2016-03-21 10:45:22,810 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1458556970596_0001 from user: hduser, in queue: default
2016-03-21 10:45:22,825 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1458556970596_0001 State change from SUBMITTED to ACCEPTED
2016-03-21 10:45:22,866 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1458556970596_0001_000001
2016-03-21 10:45:22,867 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1458556970596_0001_000001 State change from NEW to SUBMITTED
2016-03-21 10:45:22,896 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start
2016-03-21 10:45:22,896 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. skipping enforcement to allow at least one application to start
2016-03-21 10:45:22,897 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1458556970596_0001 from user: hduser activated in queue: default
2016-03-21 10:45:22,898 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1458556970596_0001 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User#1d51055, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2016-03-21 10:45:22,898 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1458556970596_0001_000001 to scheduler from user hduser in queue default
2016-03-21 10:45:22,900 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1458556970596_0001_000001 State change from SUBMITTED to SCHEDULED
Below NameNode log when launching MR job:
2016-03-21 10:45:03,746 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2016-03-21 10:45:03,746 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2016-03-21 10:45:20,613 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 3 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 7
2016-03-21 10:45:20,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.jar. BP-1804768821-172.31.4.108-1458553823105 blk_1073741834_1010{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]}
2016-03-21 10:45:21,290 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* checkFileProgress: blk_1073741834_1010{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} has not reached minimal replication 1
2016-03-21 10:45:21,292 INFO org.apache.hadoop.hdfs.server.namenode.EditLogFileOutputStream: Nothing to flush
2016-03-21 10:45:21,297 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.13.117:50010 is added to blk_1073741834_1010{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} size 270356
2016-03-21 10:45:21,297 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.14.198:50010 is added to blk_1073741834_1010 size 270356
2016-03-21 10:45:21,706 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.jar is closed by DFSClient_NONMAPREDUCE_-18612056_1
2016-03-21 10:45:21,714 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Increasing replication from 2 to 10 for /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.jar
2016-03-21 10:45:21,812 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Increasing replication from 2 to 10 for /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.split
2016-03-21 10:45:21,823 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.split. BP-1804768821-172.31.4.108-1458553823105 blk_1073741835_1011{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW], ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW]]}
2016-03-21 10:45:21,849 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.13.117:50010 is added to blk_1073741835_1011{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW], ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW]]} size 0
2016-03-21 10:45:21,853 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.14.198:50010 is added to blk_1073741835_1011{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW], ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW]]} size 0
2016-03-21 10:45:21,855 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.split is closed by DFSClient_NONMAPREDUCE_-18612056_1
2016-03-21 10:45:21,865 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.splitmetainfo. BP-1804768821-172.31.4.108-1458553823105 blk_1073741836_1012{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]}
2016-03-21 10:45:21,876 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.14.198:50010 is added to blk_1073741836_1012{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} size 0
2016-03-21 10:45:21,877 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.13.117:50010 is added to blk_1073741836_1012{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} size 0
2016-03-21 10:45:21,880 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.splitmetainfo is closed by DFSClient_NONMAPREDUCE_-18612056_1
2016-03-21 10:45:22,277 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.xml. BP-1804768821-172.31.4.108-1458553823105 blk_1073741837_1013{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]}
2016-03-21 10:45:22,327 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.14.198:50010 is added to blk_1073741837_1013{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} size 0
2016-03-21 10:45:22,328 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.13.117:50010 is added to blk_1073741837_1013{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} size 0
2016-03-21 10:45:22,332 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.xml is closed by DFSClient_NONMAPREDUCE_-18612056_1
2016-03-21 10:45:33,746 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2016-03-21 10:45:33,747 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2016-03-21 10:46:03,748 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2016-03-21 10:46:03,748 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2016-03-21 10:46:33,748 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2016-03-21 10:46:33,749 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2016-03-21 10:47:03,749 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2016-03-21 10:47:03,750 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
Any ideas ? thank you in advance for your support !.
Below *-site.xml files content. Note: I've indeed applied some dimensioning results values to properties, but I still had the EXACT SAME issue with minimal configuration (only mandatory properties).
core-site.xml
<configuration>
<property><name>fs.defaultFS</name><value>hdfs://ip-172-31-4-108:8020</value></property>
</configuration>
hdfs-site.xml
<configuration>
<property><name>dfs.replication</name><value>2</value></property>
<property><name>dfs.namenode.name.dir</name><value>file:///xvda1/dfs/nn</value></property>
<property><name>dfs.datanode.data.dir</name><value>file:///xvda1/dfs/dn</value></property>
</configuration>
mapred-site.xml
<configuration>
<property><name>mapreduce.jobhistory.address</name><value>ip-172-31-4-108:10020</value></property>
<property><name>mapreduce.jobhistory.webapp.address</name><value>ip-172-31-4-108:19888</value></property>
<property><name>mapreduce.framework.name</name><value>yarn</value></property>
<property><name>mapreduce.map.memory.mb</name><value>512</value></property>
<property><name>mapreduce.reduce.memory.mb</name><value>1024</value></property>
<property><name>mapreduce.map.java.opts</name><value>410</value></property>
<property><name>mapreduce.reduce.java.opts</name><value>820</value></property>
</configuration>
yarn-site.xml
<configuration>
<property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
<property><name>yarn.resourcemanager.hostname</name><value>ip-172-31-4-108</value></property>
<property><name>yarn.nodemanager.local-dirs</name><value>file:///xvda1/nodemgr/local</value></property>
<property><name>yarn.nodemanager.log-dirs</name><value>/var/log/hadoop-yarn/containers</value></property>
<property><name>yarn.nodemanager.remote-app-log-dir</name><value>/var/log/hadoop-yarn/apps</value></property>
<property><name>yarn.log-aggregation-enable</name><value>true</value></property>
<property><name>yarn.app.mapreduce.am.resource.mb</name><value>1024</value></property>
<property><name>yarn.app.mapreduce.am.command-opts</name><value>820</value></property>
<property><name>yarn.nodemanager.resource.memory-mb</name><value>6291456</value></property>
<property><name>yarn.scheduler.minimum_allocation-mb</name><value>524288</value></property>
<property><name>yarn.scheduler.maximum_allocation-mb</name><value>6291456</value></property>
</configuration>

Resources