Hadoop's ResourceManager fails to start when launched through Ansible

I am having trouble starting YARN (via ./start-yarn.sh) in pseudo-distributed mode when doing it through Ansible.
This is the part of my playbook that starts YARN (where hadoop_home is /home/hdoop/hadoop):
- name: hdoop > Start YARN
  become: true
  become_user: hdoop
  environment:
    HADOOP_HOME: "{{ hadoop_home }}"
    HADOOP_HDFS_HOME: "{{ hadoop_home }}"
    HADOOP_CONF_DIR: "{{ hadoop_home }}/etc/hadoop"
    HADOOP_YARN_HOME: "{{ hadoop_home }}"
  shell:
    cmd: "{{ hadoop_home }}/sbin/start-yarn.sh"
    executable: /bin/bash
Here is my yarn-site.xml file:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>0.0.0.0</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>0.0.0.0:8088</value>
  </property>
  <property>
    <name>yarn.acl.enable</name>
    <value>0</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>
This results in the NodeManager starting, but not the ResourceManager. The ResourceManager log shows the following:
STARTUP_MSG: build = Unknown -r 7a3bc90b05f257c8ace2f76d74264906f0f7a932; compiled by 'hexiaoqiao' on 2021-01-03T09:26Z
STARTUP_MSG: java = 1.8.0_282
************************************************************/
2021-02-20 18:27:03,040 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: registered UNIX signal handlers for [TERM, HUP, INT]
2021-02-20 18:27:03,632 INFO org.apache.hadoop.conf.Configuration: found resource core-site.xml at file:/home/hdoop/hadoop/etc/hadoop/core-site.xml
2021-02-20 18:27:03,736 INFO org.apache.hadoop.conf.Configuration: resource-types.xml not found
2021-02-20 18:27:03,736 INFO org.apache.hadoop.yarn.util.resource.ResourceUtils: Unable to find 'resource-types.xml'.
2021-02-20 18:27:03,799 INFO org.apache.hadoop.conf.Configuration: found resource yarn-site.xml at file:/home/hdoop/hadoop/etc/hadoop/yarn-site.xml
2021-02-20 18:27:03,808 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.RMFatalEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMFatalEventDispatcher
2021-02-20 18:27:03,897 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: NMTokenKeyRollingInterval: 86400000ms and NMTokenKeyActivationDelay: 900000ms
2021-02-20 18:27:03,905 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager: ContainerTokenKeyRollingInterval: 86400000ms and ContainerTokenKeyActivationDelay: 900000ms
2021-02-20 18:27:03,917 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: AMRMTokenKeyRollingInterval: 86400000ms and AMRMTokenKeyActivationDelay: 900000 ms
2021-02-20 18:27:03,965 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStoreEventType for class org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler
2021-02-20 18:27:03,970 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.NodesListManagerEventType for class org.apache.hadoop.yarn.server.resourcemanager.NodesListManager
2021-02-20 18:27:03,970 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Using Scheduler: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
2021-02-20 18:27:04,002 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEventType for class org.apache.hadoop.yarn.event.EventDispatcher
2021-02-20 18:27:04,003 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher
2021-02-20 18:27:04,005 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher
2021-02-20 18:27:04,005 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$NodeEventDispatcher
2021-02-20 18:27:04,081 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2021-02-20 18:27:04,231 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2021-02-20 18:27:04,231 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ResourceManager metrics system started
2021-02-20 18:27:04,245 INFO org.apache.hadoop.yarn.security.YarnAuthorizationProvider: org.apache.hadoop.yarn.security.ConfiguredYarnAuthorizer is instantiated.
2021-02-20 18:27:04,248 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.RMAppManagerEventType for class org.apache.hadoop.yarn.server.resourcemanager.RMAppManager
2021-02-20 18:27:04,254 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncherEventType for class org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher
2021-02-20 18:27:04,255 INFO org.apache.hadoop.yarn.server.resourcemanager.RMNMInfo: Registered RMNMInfo MBean
2021-02-20 18:27:04,256 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.monitor.RMAppLifetimeMonitor: Application lifelime monitor interval set to 3000 ms.
2021-02-20 18:27:04,261 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.MultiNodeSortingManager: Initializing NodeSortingService=MultiNodeSortingManager
2021-02-20 18:27:04,262 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2021-02-20 18:27:04,285 INFO org.apache.hadoop.conf.Configuration: found resource capacity-scheduler.xml at file:/home/hdoop/hadoop/etc/hadoop/capacity-scheduler.xml
2021-02-20 18:27:04,293 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler: Minimum allocation = <memory:1024, vCores:1>
2021-02-20 18:27:04,293 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler: Maximum allocation = <memory:8192, vCores:4>
2021-02-20 18:27:04,382 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root is undefined
2021-02-20 18:27:04,383 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root is undefined
2021-02-20 18:27:04,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: root, capacity=1.0, absoluteCapacity=1.0, maxCapacity=1.0, absoluteMaxCapacity=1.0, state=RUNNING, acls=SUBMIT_APP:*ADMINISTER_QUEUE:*, labels=*,
, reservationsContinueLooking=true, orderingPolicy=utilization, priority=0
2021-02-20 18:27:04,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Initialized parent-queue root name=root, fullname=root
2021-02-20 18:27:04,440 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root.default is undefined
2021-02-20 18:27:04,440 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root.default is undefined
2021-02-20 18:27:04,444 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Initializing default
capacity = 1.0 [= (float) configuredCapacity / 100 ]
absoluteCapacity = 1.0 [= parentAbsoluteCapacity * capacity ]
maxCapacity = 1.0 [= configuredMaxCapacity ]
absoluteMaxCapacity = 1.0 [= 1.0 maximumCapacity undefined, (parentAbsoluteMaxCapacity * maximumCapacity) / 100 otherwise ]
effectiveMinResource=<memory:0, vCores:0>
, effectiveMaxResource=<memory:0, vCores:0>
userLimit = 100 [= configuredUserLimit ]
userLimitFactor = 1.0 [= configuredUserLimitFactor ]
maxApplications = 10000 [= configuredMaximumSystemApplicationsPerQueue or (int)(configuredMaximumSystemApplications * absoluteCapacity)]
maxApplicationsPerUser = 10000 [= (int)(maxApplications * (userLimit / 100.0f) * userLimitFactor) ]
usedCapacity = 0.0 [= usedResourcesMemory / (clusterResourceMemory * absoluteCapacity)]
absoluteUsedCapacity = 0.0 [= usedResourcesMemory / clusterResourceMemory]
maxAMResourcePerQueuePercent = 0.1 [= configuredMaximumAMResourcePercent ]
minimumAllocationFactor = 0.875 [= (float)(maximumAllocationMemory - minimumAllocationMemory) / maximumAllocationMemory ]
maximumAllocation = <memory:8192, vCores:4> [= configuredMaxAllocation ]
numContainers = 0 [= currentNumContainers ]
state = RUNNING [= configuredState ]
acls = SUBMIT_APP:*ADMINISTER_QUEUE:* [= configuredAcls ]
nodeLocalityDelay = 40
rackLocalityAdditionalDelay = -1
labels=*,
reservationsContinueLooking = true
preemptionDisabled = true
defaultAppPriorityPerQueue = 0
priority = 0
maxLifetime = -1 seconds
defaultLifetime = -1 seconds
2021-02-20 18:27:04,445 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager: Initialized queue: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=0, numContainers=0, effectiveMinResource=<memory:0, vCores:0> , effectiveMaxResource=<memory:0, vCores:0>
2021-02-20 18:27:04,445 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager: Initialized queue: root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, numApps=0, numContainers=0
2021-02-20 18:27:04,452 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager: Initialized root queue root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, numApps=0, numContainers=0
2021-02-20 18:27:04,452 INFO org.apache.hadoop.yarn.server.resourcemanager.placement.UserGroupMappingPlacementRule: Initialized queue mappings, override: false
2021-02-20 18:27:04,452 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.WorkflowPriorityMappingsManager: Initialized workflow priority mappings, override: false
2021-02-20 18:27:04,453 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.MultiNodeSortingManager: MultiNode scheduling is 'false', and configured policies are
2021-02-20 18:27:04,454 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized CapacityScheduler with calculator=class org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, minimumAllocation=<<memory:1024, vCores:1>>, maximumAllocation=<<memory:8192, vCores:4>>, asynchronousScheduling=false, asyncScheduleInterval=5ms,multiNodePlacementEnabled=false
2021-02-20 18:27:04,476 INFO org.apache.hadoop.conf.Configuration: dynamic-resources.xml not found
2021-02-20 18:27:04,480 INFO org.apache.hadoop.yarn.server.resourcemanager.AMSProcessingChain: Initializing AMS Processing chain. Root Processor=[org.apache.hadoop.yarn.server.resourcemanager.DefaultAMSProcessor].
2021-02-20 18:27:04,480 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: disabled placement handler will be used, all scheduling requests will be rejected.
2021-02-20 18:27:04,481 INFO org.apache.hadoop.yarn.server.resourcemanager.AMSProcessingChain: Adding [org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.processor.DisabledPlacementProcessor] tp top of AMS Processing chain.
2021-02-20 18:27:04,502 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: TimelineServicePublisher is not configured
2021-02-20 18:27:04,584 INFO org.eclipse.jetty.util.log: Logging initialized @2275ms to org.eclipse.jetty.util.log.Slf4jLog
2021-02-20 18:27:04,943 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2021-02-20 18:27:04,980 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.resourcemanager is not defined
2021-02-20 18:27:05,006 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2021-02-20 18:27:05,018 INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context cluster
2021-02-20 18:27:05,018 INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context static
2021-02-20 18:27:05,018 INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context logs
2021-02-20 18:27:05,018 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context cluster
2021-02-20 18:27:05,018 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2021-02-20 18:27:05,018 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2021-02-20 18:27:05,021 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /cluster/*
2021-02-20 18:27:05,021 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2021-02-20 18:27:05,021 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /app/*
2021-02-20 18:27:06,588 INFO org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2021-02-20 18:27:06,625 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 8088
2021-02-20 18:27:06,628 INFO org.eclipse.jetty.server.Server: jetty-9.4.20.v20190813; built: 2019-08-13T21:28:18.144Z; git: 84700530e645e812b336747464d6fbbf370c9a20; jvm 1.8.0_282-8u282-b08-0ubuntu1~20.04-b08
2021-02-20 18:27:06,716 INFO org.eclipse.jetty.server.session: DefaultSessionIdManager workerName=node0
2021-02-20 18:27:06,716 INFO org.eclipse.jetty.server.session: No SessionScavenger set, using defaults
2021-02-20 18:27:06,721 INFO org.eclipse.jetty.server.session: node0 Scavenging every 660000ms
2021-02-20 18:27:06,763 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2021-02-20 18:27:06,798 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2021-02-20 18:27:06,819 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2021-02-20 18:27:06,836 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2021-02-20 18:27:06,906 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@6cb6decd{logs,/logs,file:///home/hdoop/hadoop/logs/,AVAILABLE}
2021-02-20 18:27:06,915 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@30bce90b{static,/static,jar:file:/home/hdoop/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.2.2.jar!/webapps/static,AVAILABLE}
2021-02-20 18:27:06,935 WARN org.eclipse.jetty.webapp.WebInfConfiguration: Can't generate resourceBase as part of webapp tmp dir name: java.lang.NullPointerException
2021-02-20 18:27:07,382 INFO org.eclipse.jetty.util.TypeUtil: JVM Runtime does not support Modules
2021-02-20 18:27:12,831 ERROR org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: RECEIVED SIGNAL 1: SIGHUP
2021-02-20 18:27:15,009 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.w.WebAppContext@4016ccc1{cluster,/,file:///tmp/jetty-0_0_0_0-8088-_-any-3811097377054804183.dir/webapp/,AVAILABLE}{jar:file:/home/hdoop/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.2.2.jar!/webapps/cluster}
2021-02-20 18:27:15,048 INFO org.eclipse.jetty.server.AbstractConnector: Started ServerConnector@1bd39d3c{HTTP/1.1,[http/1.1]}{0.0.0.0:8088}
2021-02-20 18:27:15,048 INFO org.eclipse.jetty.server.Server: Started @12739ms
2021-02-20 18:27:15,048 INFO org.apache.hadoop.yarn.webapp.WebApps: Web app cluster started at 8088
2021-02-20 18:27:15,277 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 100, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2021-02-20 18:27:15,327 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8033
2021-02-20 18:27:15,771 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceManagerAdministrationProtocolPB to the server
2021-02-20 18:27:15,773 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning to active state
2021-02-20 18:27:15,772 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2021-02-20 18:27:15,773 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8033: starting
2021-02-20 18:27:15,818 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating AMRMToken
2021-02-20 18:27:15,821 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager: Rolling master-key for container-tokens
2021-02-20 18:27:15,822 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Rolling master-key for nm-tokens
2021-02-20 18:27:15,822 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2021-02-20 18:27:15,822 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: storing master key with keyID 1
2021-02-20 18:27:15,822 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing RMDTMasterKey.
2021-02-20 18:27:15,833 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2021-02-20 18:27:15,833 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2021-02-20 18:27:15,834 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: storing master key with keyID 2
2021-02-20 18:27:15,834 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing RMDTMasterKey.
2021-02-20 18:27:15,835 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.nodelabels.event.NodeLabelsStoreEventType for class org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager$ForwardingEventHandler
2021-02-20 18:27:16,227 INFO org.apache.hadoop.yarn.nodelabels.store.AbstractFSNodeStore: Created store directory :file:/tmp/hadoop-yarn-hdoop/node-attribute
2021-02-20 18:27:16,297 INFO org.apache.hadoop.yarn.nodelabels.store.AbstractFSNodeStore: Finished write mirror at:file:/tmp/hadoop-yarn-hdoop/node-attribute/nodeattribute.mirror
2021-02-20 18:27:16,297 INFO org.apache.hadoop.yarn.nodelabels.store.AbstractFSNodeStore: Finished create editlog file at:file:/tmp/hadoop-yarn-hdoop/node-attribute/nodeattribute.editlog
2021-02-20 18:27:16,319 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.nodelabels.NodeAttributesStoreEventType for class org.apache.hadoop.yarn.server.resourcemanager.nodelabels.NodeAttributesManagerImpl$ForwardingEventHandler
2021-02-20 18:27:16,322 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.MultiNodeSortingManager: Starting NodeSortingService=MultiNodeSortingManager
2021-02-20 18:27:16,363 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 5000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2021-02-20 18:27:16,364 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8031
2021-02-20 18:27:16,384 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceTrackerPB to the server
2021-02-20 18:27:16,394 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2021-02-20 18:27:16,434 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8031: starting
2021-02-20 18:27:16,455 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2021-02-20 18:27:16,496 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 5000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2021-02-20 18:27:16,501 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8030
2021-02-20 18:27:16,528 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB to the server
2021-02-20 18:27:16,529 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2021-02-20 18:27:16,529 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8030: starting
2021-02-20 18:27:16,721 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 5000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2021-02-20 18:27:16,722 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8032
2021-02-20 18:27:16,725 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationClientProtocolPB to the server
2021-02-20 18:27:16,742 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8032: starting
2021-02-20 18:27:16,763 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to active state
2021-02-20 18:27:16,763 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2021-02-20 18:27:16,766 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2021-02-20 18:27:16,769 INFO org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.w.WebAppContext@4016ccc1{cluster,/,null,UNAVAILABLE}{jar:file:/home/hdoop/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.2.2.jar!/webapps/cluster}
2021-02-20 18:27:16,785 INFO org.eclipse.jetty.server.AbstractConnector: Stopped ServerConnector@1bd39d3c{HTTP/1.1,[http/1.1]}{0.0.0.0:8088}
2021-02-20 18:27:16,786 INFO org.eclipse.jetty.server.session: node0 Stopped scavenging
2021-02-20 18:27:16,791 INFO org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@30bce90b{static,/static,jar:file:/home/hdoop/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.2.2.jar!/webapps/static,UNAVAILABLE}
2021-02-20 18:27:16,791 INFO org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@6cb6decd{logs,/logs,file:///home/hdoop/hadoop/logs/,UNAVAILABLE}
2021-02-20 18:27:16,795 INFO org.apache.hadoop.ipc.Server: Stopping server on 8032
2021-02-20 18:27:16,811 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8032
2021-02-20 18:27:16,821 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2021-02-20 18:27:16,822 INFO org.apache.hadoop.ipc.Server: Stopping server on 8033
2021-02-20 18:27:16,822 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8033
2021-02-20 18:27:16,825 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning to standby state
2021-02-20 18:27:16,825 WARN org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher: org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher$LauncherThread interrupted. Returning.
2021-02-20 18:27:16,826 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2021-02-20 18:27:16,827 INFO org.apache.hadoop.ipc.Server: Stopping server on 8030
2021-02-20 18:27:16,839 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8030
2021-02-20 18:27:16,841 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2021-02-20 18:27:16,847 INFO org.apache.hadoop.ipc.Server: Stopping server on 8031
2021-02-20 18:27:16,862 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8031
2021-02-20 18:27:16,863 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2021-02-20 18:27:16,863 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: NMLivelinessMonitor thread interrupted
2021-02-20 18:27:16,868 ERROR org.apache.hadoop.yarn.event.EventDispatcher: Returning, interrupted : java.lang.InterruptedException
2021-02-20 18:27:16,868 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.activities.ActivitiesManager: org.apache.hadoop.yarn.server.resourcemanager.scheduler.activities.ActivitiesManager thread interrupted
2021-02-20 18:27:16,869 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: AsyncDispatcher is draining to stop, ignoring any new events.
2021-02-20 18:27:16,870 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: AsyncDispatcher is draining to stop, ignoring any new events.
2021-02-20 18:27:16,870 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: org.apache.hadoop.yarn.server.resourcemanager.rmapp.monitor.RMAppLifetimeMonitor thread interrupted
2021-02-20 18:27:16,873 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: AMLivelinessMonitor thread interrupted
2021-02-20 18:27:16,873 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: AMLivelinessMonitor thread interrupted
2021-02-20 18:27:16,874 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.ContainerAllocationExpirer thread interrupted
2021-02-20 18:27:16,875 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2021-02-20 18:27:16,878 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping ResourceManager metrics system...
2021-02-20 18:27:16,879 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ResourceManager metrics system stopped.
2021-02-20 18:27:16,879 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ResourceManager metrics system shutdown complete.
2021-02-20 18:27:16,879 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: AsyncDispatcher is draining to stop, ignoring any new events.
2021-02-20 18:27:16,881 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to standby state
2021-02-20 18:27:16,882 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down ResourceManager at hadoop-1/127.0.1.1
************************************************************/
At this point, a call to jps tells me that ResourceManager was not started:
hdoop@hadoop-1:~/hadoop/logs$ jps
18372 DataNode
18537 SecondaryNameNode
18282 NameNode
19549 Jps
18927 NodeManager
The curious part is that when I start it myself from a terminal, it runs just fine.
Does anyone know what's going on, or why the ResourceManager refuses to start under Ansible?

The yarn executable in $HADOOP_HOME/bin/yarn should be used instead of start-yarn.sh. The SIGHUP in the log above is the clue: daemons forked by start-yarn.sh are hung up when the Ansible task's session closes, while yarn --daemon start detaches them properly. E.g.:
- name: Start Yarn Cluster
  command: '{{ item }}'
  with_items:
    - "nohup {{ HADOOP_HOME }}/bin/yarn --daemon start resourcemanager"
    - "nohup {{ HADOOP_HOME }}/bin/yarn --daemon start nodemanager"
  become: yes
  become_method: su
  become_user: ubuntu
  environment: "{{ proxy_env }}"

Related

Spring Boot always tries to reconnect to the failed node in a Redis cluster environment

I have a Redis cluster with 3 shards; each shard has 2 nodes, 1 primary and 1 replica. I'm using Spring Boot 2.0.1.Final, and the following is the configuration and code I'm using to create the Redis cluster connection.
pom.xml:
<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
  </dependency>
</dependencies>
application.yml:
redis:
  cluster:
    nodes: 172.18.0.155:7010,172.18.0.155:7011,172.18.0.155:7012,172.18.0.156:7020,172.18.0.156:7021,172.18.0.156:7022
    max-redirects: 3
  timeout: 5000
  lettuce:
    pool:
      max-active: 200
      max-idle: 8
      min-idle: 0
      max-wait: 1000
  database: 0
RedisConfig.java:
package com.central.redis.config;

import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.RedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;
import com.central.redis.config.util.RedisObjectSerializer;

@Configuration
public class RedisConfig {

    @Primary
    @Bean("redisTemplate")
    @ConditionalOnProperty(name = "spring.redis.cluster.nodes", matchIfMissing = false)
    public RedisTemplate<String, Object> getRedisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> redisTemplate = new RedisTemplate<String, Object>();
        redisTemplate.setConnectionFactory(factory);
        RedisSerializer stringSerializer = new StringRedisSerializer();
        RedisSerializer redisObjectSerializer = new RedisObjectSerializer();
        redisTemplate.setKeySerializer(stringSerializer);
        redisTemplate.setHashKeySerializer(stringSerializer);
        redisTemplate.setValueSerializer(redisObjectSerializer);
        redisTemplate.afterPropertiesSet();
        redisTemplate.opsForValue().set("hello", "world");
        return redisTemplate;
    }

    @Primary
    @Bean("redisTemplate")
    @ConditionalOnProperty(name = "spring.redis.host", matchIfMissing = true)
    public RedisTemplate<String, Object> getSingleRedisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> redisTemplate = new RedisTemplate<String, Object>();
        redisTemplate.setConnectionFactory(factory);
        redisTemplate.setKeySerializer(new StringRedisSerializer());
        redisTemplate.setValueSerializer(new RedisObjectSerializer());
        redisTemplate.afterPropertiesSet();
        return redisTemplate;
    }
}
We recently had an issue where one of the primary nodes in a shard of the cluster had a problem, which triggered a failover. The shard had two nodes, 001 (primary) and 002 (replica); 001 failed over, and 002 became primary.
The app then kept trying to reconnect to the failed node, and access to the Redis cluster failed. My assumption was that even if one node failed, the client would automatically refresh the topology and connect to the new master node. But it didn't.
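For reference: in Lettuce 5.x (the driver behind spring-boot-starter-data-redis in Boot 2.0.x), periodic and adaptive cluster topology refresh are disabled by default, so the client can keep using a stale cluster view after a failover. A minimal sketch of enabling it; this bean is an illustrative assumption, not part of the original project:

import java.time.Duration;
import java.util.Arrays;

import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Configuration
public class LettuceRefreshConfig {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        // Seed nodes taken from application.yml; two are enough for discovery.
        RedisClusterConfiguration clusterConfig = new RedisClusterConfiguration(
                Arrays.asList("172.18.0.155:7010", "172.18.0.156:7020"));
        // Re-read CLUSTER NODES every 30 seconds, and additionally on MOVED/ASK
        // redirects and persistent reconnects, so a promoted replica is picked up.
        ClusterTopologyRefreshOptions refresh = ClusterTopologyRefreshOptions.builder()
                .enablePeriodicRefresh(Duration.ofSeconds(30))
                .enableAllAdaptiveRefreshTriggers()
                .build();
        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .clientOptions(ClusterClientOptions.builder()
                        .topologyRefreshOptions(refresh)
                        .build())
                .build();
        return new LettuceConnectionFactory(clusterConfig, clientConfig);
    }
}

Without such options, the watchdog keeps retrying the dead endpoint, which matches the endless reconnect attempts in the logs below.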
Here are the logs:
2018-11-03 17:46:21.992 [main] INFO org.springframework.jmx.export.annotation.AnnotationMBeanExporter - Located managed bean 'environmentManager': registering with JMX server as MBean [org.springframework.cloud.context.environment:name=environmentManager,type=EnvironmentManager]
2018-11-03 17:46:22.015 [main] INFO org.springframework.jmx.export.annotation.AnnotationMBeanExporter - Located MBean 'dataSourceLog': registering with JMX server as MBean [com.alibaba.druid.spring.boot.autoconfigure:name=dataSourceLog,type=DruidDataSourceWrapper]
2018-11-03 17:46:22.022 [main] INFO org.springframework.jmx.export.annotation.AnnotationMBeanExporter - Located managed bean 'refreshScope': registering with JMX server as MBean [org.springframework.cloud.context.scope.refresh:name=refreshScope,type=RefreshScope]
2018-11-03 17:46:22.055 [main] INFO org.springframework.jmx.export.annotation.AnnotationMBeanExporter - Located managed bean 'configurationPropertiesRebinder': registering with JMX server as MBean [org.springframework.cloud.context.properties:name=configurationPropertiesRebinder,context=70a9f84e,type=ConfigurationPropertiesRebinder]
2018-11-03 17:46:22.075 [main] INFO org.springframework.jmx.export.annotation.AnnotationMBeanExporter - Located MBean 'dataSourceCore': registering with JMX server as MBean [com.alibaba.druid.spring.boot.autoconfigure:name=dataSourceCore,type=DruidDataSourceWrapper]
2018-11-03 17:46:22.078 [main] INFO org.springframework.jmx.export.annotation.AnnotationMBeanExporter - Located MBean 'statFilter': registering with JMX server as MBean [com.alibaba.druid.filter.stat:name=statFilter,type=StatFilter]
2018-11-03 17:46:22.101 [main] INFO org.springframework.context.support.DefaultLifecycleProcessor - Starting beans in phase 0
2018-11-03 17:46:22.136 [main] INFO org.springframework.cloud.netflix.eureka.InstanceInfoFactory - Setting initial instance status as: STARTING
2018-11-03 17:46:22.203 [main] INFO com.netflix.discovery.DiscoveryClient - Initializing Eureka in region us-east-1
2018-11-03 17:46:22.317 [main] INFO com.netflix.discovery.provider.DiscoveryJerseyProvider - Using JSON encoding codec LegacyJacksonJson
2018-11-03 17:46:22.317 [main] INFO com.netflix.discovery.provider.DiscoveryJerseyProvider - Using JSON decoding codec LegacyJacksonJson
2018-11-03 17:46:22.557 [main] INFO com.netflix.discovery.provider.DiscoveryJerseyProvider - Using XML encoding codec XStreamXml
2018-11-03 17:46:22.558 [main] INFO com.netflix.discovery.provider.DiscoveryJerseyProvider - Using XML decoding codec XStreamXml
2018-11-03 17:46:23.144 [main] INFO com.netflix.discovery.shared.resolver.aws.ConfigClusterResolver - Resolving eureka endpoints via configuration
2018-11-03 17:46:23.188 [main] INFO com.netflix.discovery.DiscoveryClient - Disable delta property : false
2018-11-03 17:46:23.189 [main] INFO com.netflix.discovery.DiscoveryClient - Single vip registry refresh property : null
2018-11-03 17:46:23.189 [main] INFO com.netflix.discovery.DiscoveryClient - Force full registry fetch : false
2018-11-03 17:46:23.189 [main] INFO com.netflix.discovery.DiscoveryClient - Application is null : false
2018-11-03 17:46:23.189 [main] INFO com.netflix.discovery.DiscoveryClient - Registered Applications size is zero : true
2018-11-03 17:46:23.189 [main] INFO com.netflix.discovery.DiscoveryClient - Application version is -1: true
2018-11-03 17:46:23.189 [main] INFO com.netflix.discovery.DiscoveryClient - Getting all instance registry info from the eureka server
2018-11-03 17:46:23.578 [main] INFO com.netflix.discovery.DiscoveryClient - The response status is 200
2018-11-03 17:46:23.587 [main] INFO com.netflix.discovery.DiscoveryClient - Starting heartbeat executor: renew interval is: 10
2018-11-03 17:46:23.596 [main] INFO com.netflix.discovery.InstanceInfoReplicator - InstanceInfoReplicator onDemand update allowed rate per min is 4
2018-11-03 17:46:23.602 [main] INFO com.netflix.discovery.DiscoveryClient - Discovery Client initialized at timestamp 1541238383601 with initial instances count: 7
2018-11-03 17:46:23.622 [main] INFO org.springframework.cloud.netflix.eureka.serviceregistry.EurekaServiceRegistry - Registering application AUTH-SERVER with eureka with status UP
2018-11-03 17:46:23.624 [main] INFO com.netflix.discovery.DiscoveryClient - Saw local status change event StatusChangeEvent [timestamp=1541238383623, current=UP, previous=STARTING]
2018-11-03 17:46:23.634 [DiscoveryClient-InstanceInfoReplicator-0] INFO com.netflix.discovery.DiscoveryClient - DiscoveryClient_AUTH-SERVER/auth-server:172.18.0.153:8000 : registering service...
2018-11-03 17:46:23.636 [main] INFO org.springframework.context.support.DefaultLifecycleProcessor - Starting beans in phase 2147483647
2018-11-03 17:46:23.637 [main] INFO springfox.documentation.spring.web.plugins.DocumentationPluginsBootstrapper - Context refreshed
2018-11-03 17:46:23.706 [main] INFO springfox.documentation.spring.web.plugins.DocumentationPluginsBootstrapper - Found 1 custom documentation plugin(s)
2018-11-03 17:46:23.720 [DiscoveryClient-InstanceInfoReplicator-0] INFO com.netflix.discovery.DiscoveryClient - DiscoveryClient_AUTH-SERVER/auth-server:172.18.0.153:8000 - registration status: 204
2018-11-03 17:46:23.908 [DiscoveryClient-InstanceInfoReplicator-0] INFO com.alibaba.druid.pool.DruidDataSource - {dataSource-1} inited
2018-11-03 17:46:23.940 [main] INFO springfox.documentation.spring.web.scanners.ApiListingReferenceScanner - Scanning for api listing references
2018-11-03 17:46:24.391 [main] INFO springfox.documentation.spring.web.readers.operation.CachingOperationNameGenerator - Generating unique operation named: rolesUsingGET_1
2018-11-03 17:46:24.525 [main] INFO springfox.documentation.spring.web.readers.operation.CachingOperationNameGenerator - Generating unique operation named: getUserTokenInfoUsingPOST_1
2018-11-03 17:46:24.559 [main] INFO springfox.documentation.spring.web.readers.operation.CachingOperationNameGenerator - Generating unique operation named: deleteUsingDELETE_1
2018-11-03 17:46:24.578 [main] INFO springfox.documentation.spring.web.readers.operation.CachingOperationNameGenerator - Generating unique operation named: saveOrUpdateUsingPOST_1
2018-11-03 17:46:24.893 [DiscoveryClient-InstanceInfoReplicator-0] INFO com.alibaba.druid.pool.DruidDataSource - {dataSource-2} inited
2018-11-03 17:46:24.947 [main] INFO org.springframework.scheduling.annotation.ScheduledAnnotationBeanPostProcessor - No TaskScheduler/ScheduledExecutorService bean found for scheduled processing
2018-11-03 17:46:24.969 [main] INFO org.apache.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-8000"]
2018-11-03 17:46:24.971 [main] INFO org.apache.tomcat.util.net.NioSelectorPool - Using a shared selector for servlet write/read
2018-11-03 17:46:25.027 [main] INFO org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat started on port(s): 8000 (http) with context path ''
2018-11-03 17:46:25.029 [main] INFO org.springframework.cloud.netflix.eureka.serviceregistry.EurekaAutoServiceRegistration - Updating port to 8000
2018-11-03 17:46:25.034 [main] INFO com.central.OpenAuthServerApp - Started OpenAuthServerApp in 31.95 seconds (JVM running for 33.267)
2018-11-03 17:48:47.239 [lettuce-eventExecutorLoop-1-1] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was /172.18.0.156:7022
2018-11-03 17:48:47.239 [lettuce-eventExecutorLoop-1-2] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was /172.18.0.156:7022
2018-11-03 17:48:56.236 [lettuce-eventExecutorLoop-1-2] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:48:56.236 [lettuce-eventExecutorLoop-1-1] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:49:04.436 [lettuce-eventExecutorLoop-1-3] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:49:04.436 [lettuce-eventExecutorLoop-1-2] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:49:20.835 [lettuce-eventExecutorLoop-1-1] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:49:20.835 [lettuce-eventExecutorLoop-1-3] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:49:50.935 [lettuce-eventExecutorLoop-1-2] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:49:50.935 [lettuce-eventExecutorLoop-1-1] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:50:21.035 [lettuce-eventExecutorLoop-1-3] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:50:21.037 [lettuce-eventExecutorLoop-1-2] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:50:51.135 [lettuce-eventExecutorLoop-1-1] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:50:51.135 [lettuce-eventExecutorLoop-1-3] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:51:21.235 [lettuce-eventExecutorLoop-1-1] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:51:21.236 [lettuce-eventExecutorLoop-1-2] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:51:23.191 [AsyncResolver-bootstrap-executor-0] INFO com.netflix.discovery.shared.resolver.aws.ConfigClusterResolver - Resolving eureka endpoints via configuration
2018-11-03 17:51:51.335 [lettuce-eventExecutorLoop-1-3] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:51:51.335 [lettuce-eventExecutorLoop-1-2] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:52:21.435 [lettuce-eventExecutorLoop-1-1] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:52:21.435 [lettuce-eventExecutorLoop-1-3] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:52:51.535 [lettuce-eventExecutorLoop-1-1] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
2018-11-03 17:52:51.535 [lettuce-eventExecutorLoop-1-2] INFO io.lettuce.core.protocol.ConnectionWatchdog - Reconnecting, last destination was 172.18.0.156:7022
As you can see, there aren't any error logs here, just some Lettuce ConnectionWatchdog reconnect messages after one primary node failed. I know the reconnect behaviour is normal, but why would it affect access to the Redis cluster?
Has anyone met this problem before? Did I miss anything important?

Docker compose up slow

I have a Spring Boot/Cloud microservice application, similar to the one here, and use Docker Compose to start the services. This was running fast about two weeks back: docker-compose up used to take 2-3 minutes to start the whole application.
But now it's taking nearly 15 minutes to start, and I am sure I haven't made any changes to the Docker configuration in the last few weeks.
Any help is appreciated.
I am on Ubuntu 16.04
Docker version 17.09.1-ce, build 19e2cf6
docker-compose version 1.17.0, build ac53b73
Here's my docker-compose.yml
version: '2.1'
services:
  config-service:
    image: test/config-service
    mem_limit: 1073741824
    restart: unless-stopped
    volumes:
      - ~/logs/config-service:/app/logs
    healthcheck:
      test: ["CMD", "curl", "-I", "http://config-service:8888"]
      interval: 5s
      timeout: 3s
      retries: 5
    ports:
      - 8888:8888
    depends_on:
      mongo-db:
        condition: service_healthy
  eureka-service:
    image: test/eureka-service
    mem_limit: 1073741824
    restart: unless-stopped
    volumes:
      - ~/logs/eureka-service:/app/logs
    healthcheck:
      test: ["CMD", "curl", "-I", "http://eureka-service:8761"]
      interval: 5s
      timeout: 3s
      retries: 5
    ports:
      - 8761:8761
    depends_on:
      config-service:
        condition: service_healthy
  user-service:
    image: test/user-service
    mem_limit: 1073741824
    restart: unless-stopped
    volumes:
      - ~/logs/user-service:/app/logs
    ports:
      - 8080:8080
    depends_on:
      eureka-service:
        condition: service_healthy
Tomcat log (trimmed, as I reached the max word limit):
2018-01-18 08:23:00,467 INFO [main] com.test.userservice.UserServiceApplication : The following profiles are active: docker
2018-01-18 08:23:00,636 INFO [main] org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@56aac163: startup date [Thu Jan 18 08:23:00 GMT 2018]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@e2d56bf
2018-01-18 08:23:11,079 INFO [main] org.springframework.beans.factory.support.DefaultListableBeanFactory : Overriding bean definition for bean 'managementServletContext' with a different definition: replacing [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.actuate.autoconfigure.EndpointWebMvcHypermediaManagementContextConfiguration; factoryMethodName=managementServletContext; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/actuate/autoconfigure/EndpointWebMvcHypermediaManagementContextConfiguration.class]] with [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.actuate.autoconfigure.EndpointWebMvcAutoConfiguration; factoryMethodName=managementServletContext; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/actuate/autoconfigure/EndpointWebMvcAutoConfiguration.class]]
2018-01-18 08:23:14,462 INFO [main] org.springframework.cloud.context.scope.GenericScope : BeanFactory id=ebfc4c4a-204e-3d20-ab06-5ee31c9ae744
2018-01-18 08:23:14,709 INFO [main] org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2018-01-18 08:23:16,511 INFO [main] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.sleuth.instrument.async.AsyncDefaultAutoConfiguration$DefaultAsyncConfigurerSupport' of type [org.springframework.cloud.sleuth.instrument.async.AsyncDefaultAutoConfiguration$DefaultAsyncConfigurerSupport$$EnhancerBySpringCGLIB$$bf6fe615] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-01-18 08:23:17,760 INFO [main] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.sleuth.annotation.SleuthAnnotationAutoConfiguration' of type [org.springframework.cloud.sleuth.annotation.SleuthAnnotationAutoConfiguration$$EnhancerBySpringCGLIB$$84938e84] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-01-18 08:23:20,662 INFO [main] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker : Bean 'sleuthAdvisorConfig' of type [org.springframework.cloud.sleuth.annotation.SleuthAdvisorConfig] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-01-18 08:23:20,909 INFO [main] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker : Bean 'security.oauth2.client-org.springframework.boot.autoconfigure.security.oauth2.OAuth2ClientProperties' of type [org.springframework.boot.autoconfigure.security.oauth2.OAuth2ClientProperties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-01-18 08:23:21,116 INFO [main] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.boot.autoconfigure.security.oauth2.OAuth2AutoConfiguration' of type [org.springframework.boot.autoconfigure.security.oauth2.OAuth2AutoConfiguration$$EnhancerBySpringCGLIB$$39780452] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-01-18 08:23:21,561 INFO [main] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker : Bean 'resourceServerProperties' of type [org.springframework.boot.autoconfigure.security.oauth2.resource.ResourceServerProperties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-01-18 08:23:21,985 INFO [main] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.netflix.metrics.MetricsInterceptorConfiguration$MetricsRestTemplateConfiguration' of type [org.springframework.cloud.netflix.metrics.MetricsInterceptorConfiguration$MetricsRestTemplateConfiguration$$EnhancerBySpringCGLIB$$b45aee93] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-01-18 08:23:22,516 INFO [main] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$9e484b4f] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-01-18 08:23:23,191 INFO [main] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.sleuth.instrument.async.AsyncDefaultAutoConfiguration' of type [org.springframework.cloud.sleuth.instrument.async.AsyncDefaultAutoConfiguration$$EnhancerBySpringCGLIB$$6a11f471] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-01-18 08:23:23,576 INFO [main] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.sleuth.instrument.web.client.TraceWebClientAutoConfiguration$TraceOAuthConfiguration' of type [org.springframework.cloud.sleuth.instrument.web.client.TraceWebClientAutoConfiguration$TraceOAuthConfiguration$$EnhancerBySpringCGLIB$$c72adab1] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-01-18 08:23:30,323 INFO [main] org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8080 (http)
2018-01-18 08:23:30,454 INFO [main] org.apache.catalina.core.StandardService : Starting service [Tomcat]
2018-01-18 08:23:30,460 INFO [main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.23
2018-01-18 08:23:30,949 INFO [localhost-startStop-1] org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2018-01-18 08:23:30,971 INFO [localhost-startStop-1] org.springframework.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 30335 ms
2018-01-18 08:23:42,305 INFO [localhost-startStop-1] org.springframework.boot.web.servlet.FilterRegistrationBean : Mapping filter: 'metricsFilter' to: [/*]
2018-01-18 08:23:42,308 INFO [localhost-startStop-1] org.springframework.boot.web.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
2018-01-18 08:23:42,308 INFO [localhost-startStop-1] org.springframework.boot.web.servlet.FilterRegistrationBean : Mapping filter: 'traceFilter' to: [/*]
2018-01-18 08:23:42,308 INFO [localhost-startStop-1] org.springframework.boot.web.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2018-01-18 08:23:42,308 INFO [localhost-startStop-1] org.springframework.boot.web.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*]
2018-01-18 08:23:42,308 INFO [localhost-startStop-1] org.springframework.boot.web.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*]
2018-01-18 08:23:42,310 INFO [localhost-startStop-1] org.springframework.boot.web.servlet.DelegatingFilterProxyRegistrationBean : Mapping filter: 'springSecurityFilterChain' to: [/*]
2018-01-18 08:23:42,310 INFO [localhost-startStop-1] org.springframework.boot.web.servlet.FilterRegistrationBean : Mapping filter: 'webRequestLoggingFilter' to: [/*]
2018-01-18 08:23:42,310 INFO [localhost-startStop-1] org.springframework.boot.web.servlet.FilterRegistrationBean : Mapping filter: 'applicationContextIdFilter' to: [/*]
2018-01-18 08:23:42,310 INFO [localhost-startStop-1] org.springframework.boot.web.servlet.ServletRegistrationBean : Mapping servlet: 'dispatcherServlet' to [/]
2018-01-18 08:24:01,859 INFO [main] org.mongodb.driver.cluster : Cluster created with settings {hosts=[mongo-db:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
2018-01-18 08:24:01,875 INFO [main] org.mongodb.driver.cluster : Adding discovered server mongo-db:27017 to client view of cluster
2018-01-18 08:24:07,610 INFO [cluster-ClusterId{value='5a6059a0b3cb1f00072c45c2', description='null'}-mongo-db:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:1, serverValue:28}] to mongo-db:27017
2018-01-18 08:24:07,613 INFO [cluster-ClusterId{value='5a6059a0b3cb1f00072c45c2', description='null'}-mongo-db:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=mongo-db:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 4, 10]}, minWireVersion=0, maxWireVersion=5, maxDocumentSize=16777216, roundTripTimeNanos=577257}
2018-01-18 08:24:07,614 INFO [cluster-ClusterId{value='5a6059a0b3cb1f00072c45c2', description='null'}-mongo-db:27017] org.mongodb.driver.cluster : Discovered cluster type of STANDALONE
2018-01-18 08:28:53,270 WARN [main] com.netflix.config.sources.URLConfigurationSource : No URLs will be polled as dynamic configuration sources.
2018-01-18 08:28:54,188 INFO [main] com.netflix.config.sources.URLConfigurationSource : To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
2018-01-18 08:28:56,203 WARN [main] com.netflix.config.sources.URLConfigurationSource : No URLs will be polled as dynamic configuration sources.
2018-01-18 08:28:56,204 INFO [main] com.netflix.config.sources.URLConfigurationSource : To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
2018-01-18 08:29:37,452 INFO [main] org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter : Looking for @ControllerAdvice: org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@56aac163: startup date [Thu Jan 18 08:23:00 GMT 2018]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@e2d56bf
2018-01-18 08:29:39,074 INFO [main] org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter : Detected ResponseBodyAdvice bean in org.springframework.boot.actuate.autoconfigure.EndpointWebMvcHypermediaManagementContextConfiguration$ActuatorEndpointLinksAdvice
2018-01-18 08:29:54,522 INFO [main] org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping : Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)
2018-01-18 08:29:54,535 INFO [main] org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping : Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest)
2018-01-18 08:30:02,532 INFO [main] org.springframework.web.servlet.handler.SimpleUrlHandlerMapping : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-01-18 08:30:02,532 INFO [main] org.springframework.web.servlet.handler.SimpleUrlHandlerMapping : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-01-18 08:30:05,657 INFO [main] org.springframework.web.servlet.mvc.method.annotation.ExceptionHandlerExceptionResolver : Detected @ExceptionHandler methods in exceptionHandler
2018-01-18 08:30:05,657 INFO [main] org.springframework.web.servlet.mvc.method.annotation.ExceptionHandlerExceptionResolver : Detected ResponseBodyAdvice implementation in org.springframework.boot.actuate.autoconfigure.EndpointWebMvcHypermediaManagementContextConfiguration$ActuatorEndpointLinksAdvice
2018-01-18 08:30:26,080 INFO [main] org.springframework.web.servlet.handler.SimpleUrlHandlerMapping : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-01-18 08:33:58,388 INFO [main] org.springframework.security.web.DefaultSecurityFilterChain : Creating filter chain: org.springframework.boot.actuate.autoconfigure.ManagementWebSecurityAutoConfiguration$LazyEndpointPathRequestMatcher@723ed581, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6b760460, org.springframework.security.web.context.SecurityContextPersistenceFilter@7af1cd63, org.springframework.security.web.header.HeaderWriterFilter@667e34b1, org.springframework.web.filter.CorsFilter@796065aa, org.springframework.security.web.authentication.logout.LogoutFilter@62515a47, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@6dc1484, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@3c2772d1, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@89c65d5, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@28a6301f, org.springframework.security.web.session.SessionManagementFilter@6dba847b, org.springframework.security.web.access.ExceptionTranslationFilter@45673f68, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@5a5c128]
2018-01-18 08:34:00,867 INFO [main] org.springframework.security.web.DefaultSecurityFilterChain : Creating filter chain: org.springframework.security.web.util.matcher.AnyRequestMatcher@1, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@766a4535, org.springframework.security.web.context.SecurityContextPersistenceFilter@7351a16e, org.springframework.security.web.header.HeaderWriterFilter@38eb2fb0, org.springframework.security.web.authentication.logout.LogoutFilter@4074023c, org.springframework.security.oauth2.provider.authentication.OAuth2AuthenticationProcessingFilter@562457e1, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@5bb7643d, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@3ac04654, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@63718b93, org.springframework.security.web.session.SessionManagementFilter@4567e53d, org.springframework.security.web.access.ExceptionTranslationFilter@336365bc, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@3721177d]
2018-01-18 08:34:01,272 INFO [main] org.springframework.security.web.DefaultSecurityFilterChain : Creating filter chain: OrRequestMatcher [requestMatchers=[Ant [pattern='/**']]], [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@35835fa, org.springframework.security.web.context.SecurityContextPersistenceFilter@2df3c564, org.springframework.security.web.header.HeaderWriterFilter@aac3f4e, org.springframework.security.web.authentication.logout.LogoutFilter@518cf84a, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@62e7dffa, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@2715644a, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@4c2869a9, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@56f71edb, org.springframework.security.web.session.SessionManagementFilter@1f38957, org.springframework.security.web.access.ExceptionTranslationFilter@7a2e0858, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@b8a7e43]
2018-01-18 08:34:43,897 INFO [main] org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler : Initializing ExecutorService
2018-01-18 08:34:52,517 INFO [main] org.springframework.jmx.export.annotation.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2018-01-18 08:34:53,014 INFO [main] org.springframework.jmx.export.annotation.AnnotationMBeanExporter : Bean with name 'environmentManager' has been autodetected for JMX exposure
2018-01-18 08:34:53,016 INFO [main] org.springframework.jmx.export.annotation.AnnotationMBeanExporter : Bean with name 'configurationPropertiesRebinder' has been autodetected for JMX exposure
2018-01-18 08:34:53,016 INFO [main] org.springframework.jmx.export.annotation.AnnotationMBeanExporter : Bean with name 'refreshEndpoint' has been autodetected for JMX exposure
2018-01-18 08:34:53,154 INFO [main] org.springframework.jmx.export.annotation.AnnotationMBeanExporter : Bean with name 'restartEndpoint' has been autodetected for JMX exposure
2018-01-18 08:34:53,155 INFO [main] org.springframework.jmx.export.annotation.AnnotationMBeanExporter : Bean with name 'serviceRegistryEndpoint' has been autodetected for JMX exposure
2018-01-18 08:34:53,165 INFO [main] org.springframework.jmx.export.annotation.AnnotationMBeanExporter : Bean with name 'refreshScope' has been autodetected for JMX exposure
2018-01-18 08:34:53,360 INFO [main] org.springframework.jmx.export.annotation.AnnotationMBeanExporter : Located managed bean 'environmentManager': registering with JMX server as MBean [org.springframework.cloud.context.environment:name=environmentManager,type=EnvironmentManager]
2018-01-18 08:34:54,090 INFO [main] org.springframework.jmx.export.annotation.AnnotationMBeanExporter : Located managed bean 'restartEndpoint': registering with JMX server as MBean [org.springframework.cloud.context.restart:name=restartEndpoint,type=RestartEndpoint]
2018-01-18 08:34:54,190 INFO [main] org.springframework.jmx.export.annotation.AnnotationMBeanExporter : Located managed bean 'serviceRegistryEndpoint': registering with JMX server as MBean [org.springframework.cloud.client.serviceregistry.endpoint:name=serviceRegistryEndpoint,type=ServiceRegistryEndpoint]
2018-01-18 08:34:54,325 INFO [main] org.springframework.jmx.export.annotation.AnnotationMBeanExporter : Located managed bean 'refreshScope': registering with JMX server as MBean [org.springframework.cloud.context.scope.refresh:name=refreshScope,type=RefreshScope]
2018-01-18 08:34:54,561 INFO [main] org.springframework.jmx.export.annotation.AnnotationMBeanExporter : Located managed bean 'configurationPropertiesRebinder': registering with JMX server as MBean [org.springframework.cloud.context.properties:name=configurationPropertiesRebinder,context=56aac163,type=ConfigurationPropertiesRebinder]
2018-01-18 08:34:54,699 INFO [main] org.springframework.jmx.export.annotation.AnnotationMBeanExporter : Located managed bean 'refreshEndpoint': registering with JMX server as MBean [org.springframework.cloud.endpoint:name=refreshEndpoint,type=RefreshEndpoint]
2018-01-18 08:34:58,144 INFO [main] org.springframework.context.support.DefaultLifecycleProcessor : Starting beans in phase 0
2018-01-18 08:34:58,540 INFO [main] org.springframework.cloud.netflix.eureka.InstanceInfoFactory : Setting initial instance status as: STARTING
2018-01-18 08:35:01,069 INFO [main] com.netflix.discovery.DiscoveryClient : Initializing Eureka in region us-east-1
2018-01-18 08:35:12,136 INFO [main] com.netflix.discovery.provider.DiscoveryJerseyProvider : Using JSON encoding codec LegacyJacksonJson
2018-01-18 08:35:12,137 INFO [main] com.netflix.discovery.provider.DiscoveryJerseyProvider : Using JSON decoding codec LegacyJacksonJson
2018-01-18 08:35:13,882 INFO [main] com.netflix.discovery.provider.DiscoveryJerseyProvider : Using XML encoding codec XStreamXml
2018-01-18 08:35:13,882 INFO [main] com.netflix.discovery.provider.DiscoveryJerseyProvider : Using XML decoding codec XStreamXml
2018-01-18 08:35:16,570 INFO [main] com.netflix.discovery.shared.resolver.aws.ConfigClusterResolver : Resolving eureka endpoints via configuration
2018-01-18 08:35:17,044 INFO [main] com.netflix.discovery.DiscoveryClient : Disable delta property : false
2018-01-18 08:35:17,044 INFO [main] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null
2018-01-18 08:35:17,044 INFO [main] com.netflix.discovery.DiscoveryClient : Force full registry fetch : false
2018-01-18 08:35:17,044 INFO [main] com.netflix.discovery.DiscoveryClient : Application is null : false
2018-01-18 08:35:17,045 INFO [main] com.netflix.discovery.DiscoveryClient : Registered Applications size is zero : true
2018-01-18 08:35:17,045 INFO [main] com.netflix.discovery.DiscoveryClient : Application version is -1: true
2018-01-18 08:35:17,045 INFO [main] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server
2018-01-18 08:35:19,108 INFO [main] com.netflix.discovery.DiscoveryClient : The response status is 200
2018-01-18 08:35:19,109 INFO [main] com.netflix.discovery.DiscoveryClient : Starting heartbeat executor: renew interval is: 30
2018-01-18 08:35:19,151 INFO [main] com.netflix.discovery.InstanceInfoReplicator : InstanceInfoReplicator onDemand update allowed rate per min is 4
2018-01-18 08:35:19,221 INFO [main] com.netflix.discovery.DiscoveryClient : Discovery Client initialized at timestamp 1516264519221 with initial instances count: 2
2018-01-18 08:35:21,590 INFO [main] org.springframework.cloud.netflix.eureka.serviceregistry.EurekaServiceRegistry : Registering application user-service with eureka with status UP
2018-01-18 08:35:21,592 INFO [main] com.netflix.discovery.DiscoveryClient : Saw local status change event StatusChangeEvent [timestamp=1516264521592, current=UP, previous=STARTING]
2018-01-18 08:35:21,597 INFO [DiscoveryClient-InstanceInfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_USER-SERVICE/fad9b0071a33:user-service:8080: registering service...
2018-01-18 08:35:22,235 INFO [DiscoveryClient-InstanceInfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_USER-SERVICE/fad9b0071a33:user-service:8080 - registration status: 204
2018-01-18 08:35:56,660 INFO [main] org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2018-01-18 08:35:56,669 INFO [main] org.springframework.cloud.netflix.eureka.serviceregistry.EurekaAutoServiceRegistration : Updating port to 8080
2018-01-18 08:35:56,747 INFO [main] com.test.userservice.UserServiceApplication : Started UserServiceApplication in 805.764 seconds (JVM running for 807.935)

Flume does not write to HDFS from kafka topic

I am trying to read from a Kafka topic and store it to HDFS via a Flume sink; the input data is JSON. Following is my config file:
# components name
a1.sources = source1
a1.channels = channel1
a1.sinks = sink1
a1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.source1.zookeeperConnect = server1:port,server2:port,server3:port
a1.sources.source1.kafka.bootstrap.servers = server1:port,server2:port,server3:port
a1.sources.source1.kafka.topics = TOPIC
a1.sources.source1.kafka.consumer.group.id = flume
a1.sources.source1.channels = channel1
a1.sources.source1.interceptors = i1
a1.sources.source1.interceptors.i1.type = timestamp
a1.sources.source1.kafka.consumer.timeout.ms = 100
a1.channels.channel1.type = memory
a1.channels.channel1.capacity = 1
a1.channels.channel1.transactionCapacity = 1
a1.sinks.sink1.type = hdfs
a1.sinks.sink1.hdfs.path = /path/kafka/
a1.sinks.sink1.hdfs.rollInterval = 1
a1.sinks.sink1.hdfs.rollSize = 1
a1.sinks.sink1.hdfs.rollCount = 1
a1.sinks.sink1.hdfs.fileType = DataStream
a1.sinks.sink1.channel = channel1
a1.sinks.sink1.hdfs.idleTimeout = 10
and here is the flume-ng command:
flume-ng agent --conf-file /path/kafka_hdfs_sink.conf --conf /path/flume/conf/ --name a1;
Following are logs after loading all dependencies,
18/01/30 12:15:24 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
18/01/30 12:15:24 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:/home/fk010191/kafka/kafka_hdfs_sink.conf
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Processing:sink1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Processing:sink1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Processing:sink1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Added sinks: sink1 Agent: a1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Processing:sink1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Processing:sink1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Processing:sink1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Processing:sink1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Processing:sink1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [a1]
18/01/30 12:15:24 INFO node.AbstractConfigurationProvider: Creating channels
18/01/30 12:15:24 INFO channel.DefaultChannelFactory: Creating instance of channel channel1 type memory
18/01/30 12:15:24 INFO node.AbstractConfigurationProvider: Created channel channel1
18/01/30 12:15:24 INFO source.DefaultSourceFactory: Creating instance of source source1, type org.apache.flume.source.kafka.KafkaSource
18/01/30 12:15:24 ERROR node.AbstractConfigurationProvider: Source source1 has been removed due to an error during configuration
org.apache.flume.conf.ConfigurationException: Kafka topic must be specified.
at org.apache.flume.source.kafka.KafkaSource.doConfigure(KafkaSource.java:183)
at org.apache.flume.source.BasicSourceSemantics.configure(BasicSourceSemantics.java:65)
at org.apache.flume.source.AbstractPollableSource.configure(AbstractPollableSource.java:63)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:331)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:102)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
18/01/30 12:15:24 INFO sink.DefaultSinkFactory: Creating instance of sink: sink1, type: hdfs
18/01/30 12:15:24 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
18/01/30 12:15:24 INFO node.AbstractConfigurationProvider: Channel channel1 connected to [sink1]
18/01/30 12:15:24 INFO node.Application: Starting new configuration:{ sourceRunners:{} sinkRunners:{sink1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4c601ec4 counterGroup:{ name:null counters:{} } }} channels:{channel1=org.apache.flume.channel.MemoryChannel{name: channel1}} }
18/01/30 12:15:24 INFO node.Application: Starting Channel channel1
18/01/30 12:15:24 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: channel1: Successfully registered new MBean.
18/01/30 12:15:24 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: channel1 started
18/01/30 12:15:24 INFO node.Application: Starting Sink sink1
18/01/30 12:15:24 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: sink1: Successfully registered new MBean.
18/01/30 12:15:24 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: sink1 started
When I stop the agent after a while, I see the output below, but it does not write anything to HDFS:
^C18/01/30 12:21:16 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle supervisor 23
18/01/30 12:21:16 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider stopping
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: sink1 stopped
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.start.time == 1517336124877
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.stop.time == 1517336476542
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.batch.complete == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.batch.empty == 46
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.batch.underflow == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.connection.closed.count == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.connection.creation.count == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.connection.failed.count == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.event.drain.attempt == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.event.drain.sucess == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: channel1 stopped
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: CHANNEL, name: channel1. channel.start.time == 1517336124874
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: CHANNEL, name: channel1. channel.stop.time == 1517336476543
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: CHANNEL, name: channel1. channel.capacity == 1
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: CHANNEL, name: channel1. channel.current.size == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: CHANNEL, name: channel1. channel.event.put.attempt == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: CHANNEL, name: channel1. channel.event.put.success == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: CHANNEL, name: channel1. channel.event.take.attempt == 46
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: CHANNEL, name: channel1. channel.event.take.success == 0
When I run the following command - bin/kafka-console-consumer.sh --bootstrap-server server1:port,server2:port,server3:port --topic TOPIC --from-beginning - I can see some data from the topic, but Flume is not writing anything to HDFS. Any help is greatly appreciated.
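Note that the stack trace above shows the real failure: KafkaSource.doConfigure throws "Kafka topic must be specified", so source1 is removed before the agent starts, and the sink then drains an empty channel (hence sink.batch.empty == 46 and channel.event.put.attempt == 0 in the shutdown metrics). The kafka.topics / kafka.bootstrap.servers property names are understood by the Kafka source shipped with Flume 1.7 and later; releases up to 1.6 expected topic, groupId, and zookeeperConnect instead. If the installed agent is a pre-1.7 build, the source section would look roughly like the sketch below (hypothetical old-style property names; verify against the user guide of the exact Flume version in use):
# old-style (pre-1.7) Kafka source properties - verify against your installed version's docs
a1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.source1.zookeeperConnect = server1:port,server2:port,server3:port
a1.sources.source1.topic = TOPIC
a1.sources.source1.groupId = flume
a1.sources.source1.channels = channel1
Separately, capacity = 1 and transactionCapacity = 1 on the memory channel, together with rollInterval/rollSize/rollCount of 1 on the sink, mean one event per transaction and one tiny HDFS file per event at best; once events do flow, larger values are usually wanted.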

Spark Streaming is not streaming, but waits showing consumer-config values

I am trying to stream data using the spark-streaming-kafka-0-10_2.11 artifact, version 2.1.1, along with spark-streaming_2.11 version 2.1.1 and kafka_2.11 version 0.10.2.1.
When I start the program, Spark does not stream data, nor does it throw any error.
Below are my code, dependencies, and response.
Dependencies:
"org.apache.kafka" % "kafka_2.11" % "0.10.2.1",
"org.apache.spark" % "spark-streaming-kafka-0-10_2.11" % "2.1.1",
"org.apache.spark" % "spark-streaming_2.11" % "2.1.1"
Code:
val sparkConf = new SparkConf().setMaster("local[*]").setAppName("kafka-dstream").set("spark.driver.allowMultipleContexts", "true")
val sc = new SparkContext(sparkConf)
val ssc = new StreamingContext(sc, Seconds(5))
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "ec2-xx-yy-zz-abc.us-west-2.compute.amazonaws.com:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> ("java-test-consumer-" + random.nextInt() + "-" + System.currentTimeMillis()),
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)
val topics = Array("test")
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](topics, kafkaParams)
)
// Print the key of every record in each partition of each micro-batch.
stream.foreachRDD { rdd =>
  rdd.foreachPartition { partition =>
    while (partition.hasNext) {
      System.out.println(">>>>>>>>>>>>>>>>>>>>>>>>>>>" + partition.next().key())
    }
  }
}
ssc.start()
ssc.awaitTermination()
Response (Logs):
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[info] Loading project definition from D:\Scala_Workspace\LM\project
[info] Set current project to LM (in build file:/D:/Scala_Workspace/LM/)
[info] Running consumer.One
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/06/09 12:43:44 INFO SparkContext: Running Spark version 2.1.1
17/06/09 12:43:44 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/09 12:43:44 INFO SecurityManager: Changing view acls to: ee208961
17/06/09 12:43:44 INFO SecurityManager: Changing modify acls to: ee208961
17/06/09 12:43:44 INFO SecurityManager: Changing view acls groups to:
17/06/09 12:43:44 INFO SecurityManager: Changing modify acls groups to:
17/06/09 12:43:44 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ee208961); groups with view permissions: Set(); users with modify permissions: Set(ee208961); groups with modify permissions: Set()
17/06/09 12:43:45 INFO Utils: Successfully started service 'sparkDriver' on port 51695.
17/06/09 12:43:45 INFO SparkEnv: Registering MapOutputTracker
17/06/09 12:43:45 INFO SparkEnv: Registering BlockManagerMaster
17/06/09 12:43:45 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/06/09 12:43:45 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/06/09 12:43:45 INFO DiskBlockManager: Created local directory at C:\Users\EE208961\AppData\Local\Temp\blockmgr-7f3c43f0-d1f5-43fb-b185-a5bf78de4496
17/06/09 12:43:45 INFO MemoryStore: MemoryStore started with capacity 93.3 MB
17/06/09 12:43:45 INFO SparkEnv: Registering OutputCommitCoordinator
17/06/09 12:43:45 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/06/09 12:43:46 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.1.22.70:4040
17/06/09 12:43:46 INFO Executor: Starting executor ID driver on host localhost
17/06/09 12:43:46 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51706.
17/06/09 12:43:46 INFO NettyBlockTransferService: Server created on 10.1.22.70:51706
17/06/09 12:43:46 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/06/09 12:43:46 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.1.22.70, 51706, None)
17/06/09 12:43:46 INFO BlockManagerMasterEndpoint: Registering block manager 10.1.22.70:51706 with 93.3 MB RAM, BlockManagerId(driver, 10.1.22.70, 51706, None)
17/06/09 12:43:46 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.1.22.70, 51706, None)
17/06/09 12:43:46 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.1.22.70, 51706, None)
17/06/09 12:43:46 WARN KafkaUtils: overriding enable.auto.commit to false for executor
17/06/09 12:43:46 WARN KafkaUtils: overriding auto.offset.reset to none for executor
17/06/09 12:43:46 WARN KafkaUtils: overriding executor group.id to spark-executor-sadad
17/06/09 12:43:46 WARN KafkaUtils: overriding receive.buffer.bytes to 65536 see KAFKA-3135
17/06/09 12:43:47 INFO DirectKafkaInputDStream: Slide time = 5000 ms
17/06/09 12:43:47 INFO DirectKafkaInputDStream: Storage level = Serialized 1x Replicated
17/06/09 12:43:47 INFO DirectKafkaInputDStream: Checkpoint interval = null
17/06/09 12:43:47 INFO DirectKafkaInputDStream: Remember interval = 5000 ms
17/06/09 12:43:47 INFO DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka010.DirectKafkaInputDStream@5cda65c2
17/06/09 12:43:47 INFO ForEachDStream: Slide time = 5000 ms
17/06/09 12:43:47 INFO ForEachDStream: Storage level = Serialized 1x Replicated
17/06/09 12:43:47 INFO ForEachDStream: Checkpoint interval = null
17/06/09 12:43:47 INFO ForEachDStream: Remember interval = 5000 ms
17/06/09 12:43:47 INFO ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@1b9c49ce
17/06/09 12:43:47 INFO ConsumerConfig: ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [ec2-54-69-23-196.us-west-2.compute.amazonaws.com:9092]
ssl.keystore.type = JKS
enable.auto.commit = false
sasl.mechanism = GSSAPI
interceptor.classes = null
exclude.internal.topics = true
ssl.truststore.password = null
client.id =
ssl.endpoint.identification.algorithm = null
max.poll.records = 2147483647
check.crcs = true
request.timeout.ms = 40000
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 65536
ssl.truststore.type = JKS
ssl.truststore.location = null
ssl.keystore.password = null
fetch.min.bytes = 1
send.buffer.bytes = 131072
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
group.id = sadad
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
session.timeout.ms = 30000
metrics.num.samples = 2
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
auto.offset.reset = latest
17/06/09 12:43:47 INFO AppInfoParser: Kafka version : 0.10.2.1
17/06/09 12:43:47 INFO AppInfoParser: Kafka commitId : a7a17cdec9eaa6c5
The code just waits here; it neither throws an error nor streams anything.
Another observation: if I don't use the "org.apache.kafka" % "kafka_2.11" % "0.10.2.1" dependency, I get
"Exception in thread "main" org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'topic_metadata': Error reading array of size 291941, only 32 bytes available",
which is why I added the kafka_2.11 (0.10.2.1) dependency.
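One thing worth ruling out before blaming the dependencies: the posted code generates a fresh group.id on every run (the log shows a fixed group.id = sadad, so the code and logs may come from different runs) and sets auto.offset.reset to latest. With that combination the direct stream only receives records produced after the job starts, so on a quiet topic the job sits at exactly this point with no output and no error, which looks like a hang. A hedged debugging sketch follows; switching to "earliest" is an assumption for diagnosis, not a general fix:
// Diagnostic sketch: re-read the topic from the beginning so existing records
// become visible ("earliest" is a debugging assumption, not a production choice),
// and print per-batch counts so an empty-but-alive stream can be told apart
// from a stalled one.
val debugKafkaParams = kafkaParams + ("auto.offset.reset" -> "earliest")
val debugStream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](topics, debugKafkaParams)
)
debugStream.foreachRDD { rdd =>
  println(s"batch record count = ${rdd.count()}")
}
If batch lines appear with count 0, the streaming machinery is fine and the problem is on the producing side; if no batch line appears every 5 seconds at all, the consumer cannot reach the broker (a misconfigured advertised listener on the EC2 host is another common culprit).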

Spring Boot taking a long time to start

I have a Spring Boot application that listens to a RabbitMQ queue.
The problem is that when I run my application it hangs at a particular step during
Hibernate initialization and takes around 10 minutes to continue.
Below is where it hangs:
INFO [] org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext
INFO [] org.springframework.web.context.ContextLoader - Root WebApplicationContext: initialization completed in 3338 ms
INFO [] org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor - Initializing ExecutorService 'metricsExecutor'
INFO [] org.springframework.boot.context.embedded.ServletRegistrationBean - Mapping servlet: 'dispatcherServlet' to [/]
INFO [] org.springframework.boot.context.embedded.FilterRegistrationBean - Mapping filter: 'metricFilter' to: [/*]
INFO [] org.springframework.boot.context.embedded.FilterRegistrationBean - Mapping filter: 'characterEncodingFilter' to: [/*]
INFO [] org.springframework.boot.context.embedded.FilterRegistrationBean - Mapping filter: 'webRequestLoggingFilter' to: [/*]
INFO [] org.springframework.boot.context.embedded.FilterRegistrationBean - Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
INFO [] org.springframework.boot.context.embedded.FilterRegistrationBean - Mapping filter: 'applicationContextIdFilter' to: [/*]
[main] INFO [] org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean - Building JPA container EntityManagerFactory for persistence unit 'default'
[main] INFO [] org.hibernate.jpa.internal.util.LogHelper - HHH000204: Processing PersistenceUnitInfo [
name: default
...]
[main] INFO [] org.hibernate.Version - HHH000412: Hibernate Core {4.3.7.Final}
[main] INFO [] org.hibernate.cfg.Environment - HHH000205: Loaded properties from resource hibernate.properties: {hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect, hibernate.show_sql=true, hibernate.bytecode.use_reflection_optimizer=false, hibernate.format_sql=true, hibernate.ejb.naming_strategy=org.hibernate.cfg.DefaultNamingStrategy}
[main] INFO [] org.hibernate.cfg.Environment - HHH000021: Bytecode provider name : javassist
[main] INFO [] org.hibernate.annotations.common.Version - HCANN000001: Hibernate Commons Annotations {4.0.5.Final}
Below is the timing info
2015-07-08 09:31:16,714 [main] INFO [] org.hibernate.Version - HHH000412: Hibernate Core {4.3.7.Final}
2015-07-08 09:31:16,717 [main] INFO [] org.hibernate.cfg.Environment - HHH000205: Loaded properties from resource hibernate.properties: {hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect, hibernate.show_sql=true, hibernate.bytecode.use_reflection_optimizer=false, hibernate.format_sql=true, hibernate.ejb.naming_strategy=org.hibernate.cfg.DefaultNamingStrategy}
2015-07-08 09:31:16,717 [main] INFO [] org.hibernate.cfg.Environment - HHH000021: Bytecode provider name : javassist
2015-07-08 09:31:16,895 [main] INFO [] org.hibernate.annotations.common.Version - HCANN000001: Hibernate Commons Annotations {4.0.5.Final}
At the line above it hangs for 8 minutes, and then it generates the log below and waits for messages.
2015-07-14 15:36:22,917 [main] INFO [] com.test.myApp.reporting.service.Application - Starting Application on hyd-rupakular-m.local with PID 654 (/Users/myUser/code/myRepo/target/classes started by rupakulr in /Users/myuser/myRepo/xyzabc)
2015-07-14 15:36:22,966 [main] INFO [] org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext - Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@5fa0d903: startup date [Tue Jul 14 15:36:22 IST 2015]; root of context hierarchy
2015-07-14 15:36:24,023 [main] INFO [] org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [spring-integration-context.xml]
2015-07-14 15:36:24,332 [main] INFO [] org.springframework.beans.factory.config.PropertiesFactoryBean - Loading properties file from URL [jar:file:/Users/rupakulr/.m2/repository/org/springframework/integration/spring-integration-core/4.1.2.RELEASE/spring-integration-core-4.1.2.RELEASE.jar!/META-INF/spring.integration.default.properties]
2015-07-14 15:45:09,646 [main] INFO [] org.springframework.boot.actuate.endpoint.mvc.EndpointHandlerMapping - Mapped "{[/manage/env],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.lang.Object org.springframework.boot.actuate.endpoint.mvc.EndpointMvcAdapter.invoke()
2015-07-14 15:45:09,646 [main] INFO [] org.springframework.boot.actuate.endpoint.mvc.EndpointHandlerMapping - Mapped "{[/manage/metrics/{name:.*}],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.lang.Object org.springframework.boot.actuate.endpoint.mvc.MetricsMvcEndpoint.value(java.lang.String)
2015-07-14 15:45:09,646 [main] INFO [] org.springframework.boot.actuate.endpoint.mvc.EndpointHandlerMapping - Mapped "{[/manage/metrics],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.lang.Object org.springframework.boot.actuate.endpoint.mvc.EndpointMvcAdapter.invoke()
2015-07-14 15:45:09,647 [main] INFO [] org.springframework.boot.actuate.endpoint.mvc.EndpointHandlerMapping - Mapped "{[/manage/mappings],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.lang.Object org.springframework.boot.actuate.endpoint.mvc.EndpointMvcAdapter.invoke()
2015-07-14 15:45:09,647 [main] INFO [] org.springframework.boot.actuate.endpoint.mvc.EndpointHandlerMapping - Mapped "{[/manage/trace],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.lang.Object org.springframework.boot.actuate.endpoint.mvc.EndpointMvcAdapter.invoke()
2015-07-14 15:45:09,647 [main] INFO [] org.springframework.boot.actuate.endpoint.mvc.EndpointHandlerMapping - Mapped "{[/manage/shutdown],methods=[POST],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.lang.Object org.springframework.boot.actuate.endpoint.mvc.ShutdownMvcEndpoint.invoke()
2015-07-14 15:45:09,647 [main] INFO [] org.springframework.boot.actuate.endpoint.mvc.EndpointHandlerMapping - Mapped "{[/manage/beans],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.lang.Object org.springframework.boot.actuate.endpoint.mvc.EndpointMvcAdapter.invoke()
2015-07-14 15:45:09,700 [main] INFO [] org.springframework.jmx.export.annotation.AnnotationMBeanExporter - Registering beans for JMX exposure on startup
2015-07-14 15:45:09,708 [main] INFO [] org.springframework.jmx.export.annotation.AnnotationMBeanExporter - Bean with name 'org.springframework.integration.channel.interceptor.WireTap#0' has been autodetected for JMX exposure
2015-07-14 15:45:09,708 [main] INFO [] org.springframework.jmx.export.annotation.AnnotationMBeanExporter - Bean with name 'org.springframework.integration.channel.interceptor.WireTap#1' has been autodetected for JMX exposure
2015-07-14 15:45:09,709 [main] INFO [] org.springframework.jmx.export.annotation.AnnotationMBeanExporter - Bean with name 'org.springframework.integration.config.RouterFactoryBean#0' has been autodetected for JMX exposure
2015-07-14 15:45:09,712 [main] INFO [] org.springframework.jmx.export.annotation.AnnotationMBeanExporter - Located managed bean 'org.springframework.integration.channel.interceptor.WireTap#0': registering with JMX server as MBean [org.springframework.integration.channel.interceptor:name=org.springframework.integration.channel.interceptor.WireTap#0,type=WireTap]
2015-07-14 15:45:09,726 [main] INFO [] org.springframework.jmx.export.annotation.AnnotationMBeanExporter - Located managed bean 'org.springframework.integration.channel.interceptor.WireTap#1': registering with JMX server as MBean [org.springframework.integration.channel.interceptor:name=org.springframework.integration.channel.interceptor.WireTap#1,type=WireTap]
2015-07-14 15:45:09,730 [main] INFO [] org.springframework.jmx.export.annotation.AnnotationMBeanExporter - Located managed bean 'org.springframework.integration.config.RouterFactoryBean#0': registering with JMX server as MBean [org.springframework.integration.router:name=org.springframework.integration.config.RouterFactoryBean#0,type=HeaderValueRouter]
2015-07-14 15:45:09,745 [main] INFO [] org.springframework.boot.actuate.endpoint.jmx.EndpointMBeanExporter - Registering beans for JMX exposure on startup
2015-07-14 15:45:09,749 [main] INFO [] org.springframework.context.support.DefaultLifecycleProcessor - Starting beans in phase -2147483648
2015-07-14 15:45:09,750 [main] INFO [] org.springframework.context.support.DefaultLifecycleProcessor - Starting beans in phase 0
2015-07-14 15:45:09,751 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - Adding {router} as a subscriber to the 'reporting-dealer-compliance-dealer-list-channel' channel
2015-07-14 15:45:09,751 [main] INFO [] org.springframework.integration.channel.DirectChannel - Channel 'application:8204.reporting-dealer-compliance-dealer-list-channel' has 1 subscriber(s).
2015-07-14 15:45:09,751 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - started org.springframework.integration.config.ConsumerEndpointFactoryBean#0
2015-07-14 15:45:09,751 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - Adding {object-to-json-transformer} as a subscriber to the 'reporting-dealer-compliance-dealer-compliance-json-channel' channel
2015-07-14 15:45:09,751 [main] INFO [] org.springframework.integration.channel.DirectChannel - Channel 'application:8204.reporting-dealer-compliance-dealer-compliance-json-channel' has 1 subscriber(s).
2015-07-14 15:45:09,751 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - started org.springframework.integration.config.ConsumerEndpointFactoryBean#1
2015-07-14 15:45:09,751 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - Adding {logging-channel-adapter:logging-channel.adapter} as a subscriber to the 'logging-channel' channel
2015-07-14 15:45:09,751 [main] INFO [] org.springframework.integration.channel.DirectChannel - Channel 'application:8204.logging-channel' has 1 subscriber(s).
2015-07-14 15:45:09,751 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - started logging-channel.adapter
2015-07-14 15:45:09,751 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - Adding {stream:outbound-channel-adapter(character):std-out-channel.adapter} as a subscriber to the 'std-out-channel' channel
2015-07-14 15:45:09,751 [main] INFO [] org.springframework.integration.channel.DirectChannel - Channel 'application:8204.std-out-channel' has 1 subscriber(s).
2015-07-14 15:45:09,751 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - started std-out-channel.adapter
2015-07-14 15:45:10,968 [main] INFO [] org.springframework.integration.amqp.inbound.AmqpInboundGateway - started org.springframework.integration.amqp.inbound.AmqpInboundGateway#0
2015-07-14 15:45:10,968 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - Adding {amqp:outbound-channel-adapter:invalidMessageChannelAdapter} as a subscriber to the 'invalid-message-channel' channel
2015-07-14 15:45:10,968 [main] INFO [] org.springframework.integration.channel.DirectChannel - Channel 'application:8204.invalid-message-channel' has 2 subscriber(s).
2015-07-14 15:45:10,968 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - started invalidMessageChannelAdapter
2015-07-14 15:45:10,968 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - Adding {service-activator} as a subscriber to the 'failed-channel' channel
2015-07-14 15:45:10,968 [main] INFO [] org.springframework.integration.channel.DirectChannel - Channel 'application:8204.failed-channel' has 1 subscriber(s).
2015-07-14 15:45:10,969 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - started org.springframework.integration.config.ConsumerEndpointFactoryBean#2
2015-07-14 15:45:10,969 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - Adding {amqp:outbound-channel-adapter:failedMessageChannelAdapter} as a subscriber to the 'failed-channel' channel
2015-07-14 15:45:10,969 [main] INFO [] org.springframework.integration.channel.DirectChannel - Channel 'application:8204.failed-channel' has 2 subscriber(s).
2015-07-14 15:45:10,969 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - started failedMessageChannelAdapter
2015-07-14 15:45:10,969 [main] INFO [] org.springframework.integration.handler.MessageHandlerChain - started org.springframework.integration.handler.MessageHandlerChain#0
2015-07-14 15:45:10,969 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - Adding {chain} as a subscriber to the 'reporting-dealer-compliance-inbound-channel' channel
2015-07-14 15:45:10,969 [main] INFO [] org.springframework.integration.channel.DirectChannel - Channel 'application:8204.reporting-dealer-compliance-inbound-channel' has 1 subscriber(s).
2015-07-14 15:45:10,969 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - started org.springframework.integration.config.ConsumerEndpointFactoryBean#3
2015-07-14 15:45:10,969 [main] INFO [] org.springframework.integration.handler.MessageHandlerChain - started org.springframework.integration.handler.MessageHandlerChain#1
2015-07-14 15:45:10,969 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - Adding {chain} as a subscriber to the 'prepare-csv' channel
2015-07-14 15:45:10,969 [main] INFO [] org.springframework.integration.channel.DirectChannel - Channel 'application:8204.prepare-csv' has 1 subscriber(s).
2015-07-14 15:45:10,969 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - started org.springframework.integration.config.ConsumerEndpointFactoryBean#4
2015-07-14 15:45:10,971 [main] INFO [] org.springframework.boot.actuate.endpoint.jmx.EndpointMBeanExporter - Located managed bean 'requestMappingEndpoint': registering with JMX server as MBean [org.springframework.boot:type=Endpoint,name=requestMappingEndpoint]
2015-07-14 15:45:10,979 [main] INFO [] org.springframework.boot.actuate.endpoint.jmx.EndpointMBeanExporter - Located managed bean 'environmentEndpoint': registering with JMX server as MBean [org.springframework.boot:type=Endpoint,name=environmentEndpoint]
2015-07-14 15:45:10,986 [main] INFO [] org.springframework.boot.actuate.endpoint.jmx.EndpointMBeanExporter - Located managed bean 'healthEndpoint': registering with JMX server as MBean [org.springframework.boot:type=Endpoint,name=healthEndpoint]
2015-07-14 15:45:10,992 [main] INFO [] org.springframework.boot.actuate.endpoint.jmx.EndpointMBeanExporter - Located managed bean 'beansEndpoint': registering with JMX server as MBean [org.springframework.boot:type=Endpoint,name=beansEndpoint]
2015-07-14 15:45:10,997 [main] INFO [] org.springframework.boot.actuate.endpoint.jmx.EndpointMBeanExporter - Located managed bean 'infoEndpoint': registering with JMX server as MBean [org.springframework.boot:type=Endpoint,name=infoEndpoint]
2015-07-14 15:45:11,003 [main] INFO [] org.springframework.boot.actuate.endpoint.jmx.EndpointMBeanExporter - Located managed bean 'metricsEndpoint': registering with JMX server as MBean [org.springframework.boot:type=Endpoint,name=metricsEndpoint]
2015-07-14 15:45:11,009 [main] INFO [] org.springframework.boot.actuate.endpoint.jmx.EndpointMBeanExporter - Located managed bean 'traceEndpoint': registering with JMX server as MBean [org.springframework.boot:type=Endpoint,name=traceEndpoint]
2015-07-14 15:45:11,014 [main] INFO [] org.springframework.boot.actuate.endpoint.jmx.EndpointMBeanExporter - Located managed bean 'dumpEndpoint': registering with JMX server as MBean [org.springframework.boot:type=Endpoint,name=dumpEndpoint]
2015-07-14 15:45:11,020 [main] INFO [] org.springframework.boot.actuate.endpoint.jmx.EndpointMBeanExporter - Located managed bean 'autoConfigurationAuditEndpoint': registering with JMX server as MBean [org.springframework.boot:type=Endpoint,name=autoConfigurationAuditEndpoint]
2015-07-14 15:45:11,026 [main] INFO [] org.springframework.boot.actuate.endpoint.jmx.EndpointMBeanExporter - Located managed bean 'shutdownEndpoint': registering with JMX server as MBean [org.springframework.boot:type=Endpoint,name=shutdownEndpoint]
2015-07-14 15:45:11,032 [main] INFO [] org.springframework.boot.actuate.endpoint.jmx.EndpointMBeanExporter - Located managed bean 'configurationPropertiesReportEndpoint': registering with JMX server as MBean [org.springframework.boot:type=Endpoint,name=configurationPropertiesReportEndpoint]
2015-07-14 15:45:11,037 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2015-07-14 15:45:11,037 [main] INFO [] org.springframework.integration.channel.PublishSubscribeChannel - Channel 'application:8204.errorChannel' has 1 subscriber(s).
2015-07-14 15:45:11,037 [main] INFO [] org.springframework.integration.endpoint.EventDrivenConsumer - started _org.springframework.integration.errorLogger
2015-07-14 15:45:11,037 [main] INFO [] org.springframework.context.support.DefaultLifecycleProcessor - Starting beans in phase 2147483647
2015-07-14 15:45:11,089 [main] INFO [] org.apache.coyote.http11.Http11NioProtocol - Initializing ProtocolHandler ["http-nio-8204"]
2015-07-14 15:45:11,095 [main] INFO [] org.apache.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-8204"]
2015-07-14 15:45:11,100 [main] INFO [] org.apache.tomcat.util.net.NioSelectorPool - Using a shared selector for servlet write/read
2015-07-14 15:45:11,112 [main] INFO [] org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer - Tomcat started on port(s): 8204 (http)
2015-07-14 15:45:11,115 [main] INFO [] com.test.myApp.reporting.service.Application - Started Application in 528.444 seconds (JVM running for 529.101)
We are experiencing a lot of problems when developing our app: every time we make a change, we have to wait 8 minutes to test it.
In case you are running your application on Linux, try adding this property to the Java command line; it makes SecureRandom seed from the non-blocking /dev/urandom device instead of /dev/random, which can block for minutes on entropy-starved machines (Tomcat's session-ID generator is a common victim):
-Djava.security.egd=file:/dev/./urandom
E.g.:
/usr/bin/java -Xmx75m -Djava.security.egd=file:/dev/./urandom -jar app.jar
I had a similar problem, although in my case it took only around 3 minutes (not 8) to start the application, and this solved it for me.
I think I faced a similar issue. In my case the startup time was proportional to the database size, and the entire problem was that Hibernate (we use hibernate-core-5.0.2.Final) loads the full metadata from the DB just to obtain the Dialect value.
We now use the following property to disable this behaviour:
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults=false
But make sure you specify a correct 'spring.jpa.database-platform' value.
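As a concrete sketch, with the PostgreSQL dialect that the Hibernate log above reports, the pair of settings would look like this in application.properties (the dialect class is inferred from the logs; adjust it to your actual database):
# tell Hibernate which dialect to use up front...
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
# ...so it no longer needs to open a connection and read JDBC metadata at startup
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults=false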
Here is the relevant part of JdbcEnvironmentInitiator.java:
@Override
public JdbcEnvironment initiateService(Map configurationValues, ServiceRegistryImplementor registry) {
    final DialectFactory dialectFactory = registry.getService( DialectFactory.class );

    // 'hibernate.temp.use_jdbc_metadata_defaults' is a temporary magic value.
    // The need for it is intended to be alleviated with future development, thus it is
    // not defined as an Environment constant...
    //
    // it is used to control whether we should consult the JDBC metadata to determine
    // certain Settings default values; it is useful to *not* do this when the database
    // may not be available (mainly in tools usage).
    boolean useJdbcMetadata = ConfigurationHelper.getBoolean(
            "hibernate.temp.use_jdbc_metadata_defaults",
            configurationValues,
            true
    );

    if ( useJdbcMetadata ) {
        final JdbcConnectionAccess jdbcConnectionAccess = buildJdbcConnectionAccess( configurationValues, registry );
        try {
            final Connection connection = jdbcConnectionAccess.obtainConnection();
            try {
                final DatabaseMetaData dbmd = connection.getMetaData();
                if ( log.isDebugEnabled() ) {
                    log.debugf(
                            "Database ->\n"
                                    + " name : %s\n"
                                    + " version : %s\n"
                                    + " major : %s\n"
                                    + " minor : %s",
                            dbmd.getDatabaseProductName(),
                            dbmd.getDatabaseProductVersion(),
                            dbmd.getDatabaseMajorVersion(),
                            dbmd.getDatabaseMinorVersion()
                    );
                    log.debugf(
                            "Driver ->\n"
                                    + " name : %s\n"
                                    + " version : %s\n"
                                    + " major : %s\n"
                                    + " minor : %s",
                            dbmd.getDriverName(),
                            dbmd.getDriverVersion(),
                            dbmd.getDriverMajorVersion(),
                            dbmd.getDriverMinorVersion()
                    );
                    log.debugf( "JDBC version : %s.%s", dbmd.getJDBCMajorVersion(), dbmd.getJDBCMinorVersion() );
                }

                Dialect dialect = dialectFactory.buildDialect(
                        configurationValues,
                        new DialectResolutionInfoSource() {
                            @Override
                            public DialectResolutionInfo getDialectResolutionInfo() {
                                try {
                                    return new DatabaseMetaDataDialectResolutionInfoAdapter( connection.getMetaData() );
                                }
                                catch ( SQLException sqlException ) {
                                    throw new HibernateException(
                                            "Unable to access java.sql.DatabaseMetaData to determine appropriate Dialect to use",
                                            sqlException
                                    );
                                }
                            }
                        }
                );
                return new JdbcEnvironmentImpl(
                        registry,
                        dialect,
                        dbmd
                );
            }
            catch (SQLException e) {
                log.unableToObtainConnectionMetadata( e.getMessage() );
            }
            finally {
                try {
                    jdbcConnectionAccess.releaseConnection( connection );
                }
                catch (SQLException ignore) {
                }
            }
        }
        catch (Exception e) {
            log.unableToObtainConnectionToQueryMetadata( e.getMessage() );
        }
    }

    // if we get here, either we were asked to not use JDBC metadata or accessing the JDBC metadata failed.
    return new JdbcEnvironmentImpl( registry, dialectFactory.buildDialect( configurationValues, null ) );
}
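With use_jdbc_metadata_defaults set to false, the useJdbcMetadata branch above is skipped entirely: Hibernate never obtains a connection to read DatabaseMetaData and falls straight through to the final return, building the Dialect from the explicitly configured value instead - which is why 'spring.jpa.database-platform' then has to be correct.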
