Getting "No available slots for topology" error for storm nimbus - apache-storm

I am new to Apache Storm and am trying to set up a local Storm cluster. I set up ZooKeeper following an online guide, and it runs fine when I start it. But when I start Nimbus with the storm nimbus command, I see a "No available slots for topology" error in the nimbus.log file.
My nimbus.log file:
SendThread(kubernetes.docker.internal:2181) [INFO] Opening socket connection to server kubernetes.docker.internal/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2020-05-25 14:51:37.260 o.a.s.z.ClientZookeeper main [INFO] Starting ZK Curator
2020-05-25 14:51:37.260 o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl main [INFO] Starting
2020-05-25 14:51:37.261 o.a.s.s.o.a.z.ZooKeeper main [INFO] Initiating client connection, connectString=127.0.0.1:2181/storm sessionTimeout=20000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState@35beb15e
2020-05-25 14:51:37.261 o.a.s.s.o.a.z.ClientCnxn main-SendThread(kubernetes.docker.internal:2181) [INFO] Socket connection established to kubernetes.docker.internal/127.0.0.1:2181, initiating session
2020-05-25 14:51:37.263 o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl main [INFO] Default schema
2020-05-25 14:51:37.264 o.a.s.s.o.a.z.ClientCnxn main-SendThread(kubernetes.docker.internal:2181) [INFO] Opening socket connection to server kubernetes.docker.internal/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2020-05-25 14:51:37.265 o.a.s.s.o.a.z.ClientCnxn main-SendThread(kubernetes.docker.internal:2181) [INFO] Session establishment complete on server kubernetes.docker.internal/127.0.0.1:2181, sessionid = 0x1000ebc40020006, negotiated timeout = 20000
2020-05-25 14:51:37.266 o.a.s.s.o.a.z.ClientCnxn main-SendThread(kubernetes.docker.internal:2181) [INFO] Socket connection established to kubernetes.docker.internal/127.0.0.1:2181, initiating session
2020-05-25 14:51:37.266 o.a.s.s.o.a.c.f.s.ConnectionStateManager main-EventThread [INFO] State change: CONNECTED
2020-05-25 14:51:37.270 o.a.s.s.o.a.z.ClientCnxn main-SendThread(kubernetes.docker.internal:2181) [INFO] Session establishment complete on server kubernetes.docker.internal/127.0.0.1:2181, sessionid = 0x1000ebc40020007, negotiated timeout = 20000
2020-05-25 14:51:37.271 o.a.s.s.o.a.c.f.s.ConnectionStateManager main-EventThread [INFO] State change: CONNECTED
2020-05-25 14:51:41.791 o.a.s.n.NimbusInfo main [INFO] Nimbus figures out its name to 7480-GQY29H2.smarshcorp.com
2020-05-25 14:51:41.817 o.a.s.d.n.Nimbus main [INFO] Starting Nimbus with conf {storm.messaging.netty.min_wait_ms=100, topology.backpressure.wait.strategy=org.apache.storm.policy.WaitStrategyProgressive, storm.resource.isolation.plugin=org.apache.storm.container.cgroup.CgroupManager, storm.zookeeper.auth.user=null, storm.messaging.netty.buffer_size=5242880, storm.exhibitor.port=8080, topology.bolt.wait.progressive.level1.count=1, pacemaker.auth.method=NONE, ui.filter=null, worker.profiler.enabled=false, executor.metrics.frequency.secs=60, supervisor.thrift.threads=16, ui.http.creds.plugin=org.apache.storm.security.auth.DefaultHttpCredentialsPlugin, supervisor.supervisors.commands=[], supervisor.queue.size=128, logviewer.cleanup.age.mins=10080, topology.tuple.serializer=org.apache.storm.serialization.types.ListDelegateSerializer, storm.cgroup.memory.enforcement.enable=false, drpc.port=3772, topology.max.spout.pending=null, topology.transfer.buffer.size=1000, nimbus.worker.heartbeats.recovery.strategy.class=org.apache.storm.nimbus.TimeOutWorkerHeartbeatsRecoveryStrategy, worker.metrics={CGroupMemory=org.apache.storm.metric.cgroup.CGroupMemoryUsage, CGroupMemoryLimit=org.apache.storm.metric.cgroup.CGroupMemoryLimit, CGroupCpu=org.apache.storm.metric.cgroup.CGroupCpu, CGroupCpuGuarantee=org.apache.storm.metric.cgroup.CGroupCpuGuarantee}, logviewer.port=8000, worker.childopts=-Xmx%HEAP-MEM%m -XX:+PrintGCDetails -Xloggc:artifacts/gc.log -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=1M -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=artifacts/heapdump, topology.component.cpu.pcore.percent=10.0, storm.daemon.metrics.reporter.plugins=[org.apache.storm.daemon.metrics.reporters.JmxPreparableReporter], blacklist.scheduler.resume.time.secs=1800, drpc.childopts=-Xmx768m, nimbus.task.launch.secs=120, logviewer.childopts=-Xmx128m, storm.supervisor.hard.memory.limit.overage.mb=2024, storm.zookeeper.servers=[127.0.0.1], storm.messaging.transport=org.apache.storm.messaging.netty.Context, storm.messaging.netty.authentication=false, topology.localityaware.higher.bound=0.8, storm.cgroup.memory.limit.tolerance.margin.mb=0.0, storm.cgroup.hierarchy.name=storm, storm.metricprocessor.class=org.apache.storm.metricstore.NimbusMetricProcessor, topology.kryo.factory=org.apache.storm.serialization.DefaultKryoFactory, nimbus.assignments.service.threads=10, worker.heap.memory.mb=768, storm.network.topography.plugin=org.apache.storm.networktopography.DefaultRackDNSToSwitchMapping, supervisor.slots.ports=[6700, 6701, 6702, 6703], topology.stats.sample.rate=0.05, storm.local.dir=/Users/anshita.singh/storm/datadir/storm, topology.backpressure.wait.park.microsec=100, topology.ras.constraint.max.state.search=10000, topology.testing.always.try.serialize=false, nimbus.assignments.service.thread.queue.size=100, storm.principal.tolocal=org.apache.storm.security.auth.DefaultPrincipalToLocal, java.library.path=/usr/local/lib:/opt/local/lib:/usr/lib:/usr/lib64, nimbus.local.assignments.backend.class=org.apache.storm.assignments.InMemoryAssignmentBackend, worker.gc.childopts=, storm.group.mapping.service.cache.duration.secs=120, topology.multilang.serializer=org.apache.storm.multilang.JsonSerializer, drpc.request.timeout.secs=600, nimbus.blobstore.class=org.apache.storm.blobstore.LocalFsBlobStore, topology.state.synchronization.timeout.secs=60, topology.bolt.wait.progressive.level2.count=1000, topology.worker.shared.thread.pool.size=4, 
topology.executor.receive.buffer.size=32768, pacemaker.servers=[], supervisor.monitor.frequency.secs=3, storm.nimbus.retry.times=5, topology.transfer.batch.size=1, transactional.zookeeper.port=null, storm.auth.simple-white-list.users=[], topology.scheduler.strategy=org.apache.storm.scheduler.resource.strategies.scheduling.DefaultResourceAwareStrategy, storm.zookeeper.port=2181, storm.zookeeper.retry.intervalceiling.millis=30000, storm.cluster.state.store=org.apache.storm.cluster.ZKStateStorageFactory, nimbus.thrift.port=6627, blacklist.scheduler.tolerance.count=3, nimbus.thrift.threads=64, supervisor.supervisors=[], nimbus.seeds=[localhost], supervisor.slot.ports=-6700 -6701 -6702 -6703, storm.cluster.metrics.consumer.publish.interval.secs=60, logviewer.filter.params=null, topology.min.replication.count=1, nimbus.blobstore.expiration.secs=600, storm.group.mapping.service=org.apache.storm.security.auth.ShellBasedGroupsMapping, storm.nimbus.retry.interval.millis=2000, topology.max.task.parallelism=null, topology.backpressure.wait.progressive.level2.count=1000, drpc.https.keystore.password=*****, resource.aware.scheduler.constraint.max.state.search=100000, supervisor.heartbeat.frequency.secs=5, nimbus.credential.renewers.freq.secs=600, storm.supervisor.medium.memory.grace.period.ms=30000, storm.thrift.transport=org.apache.storm.security.auth.SimpleTransportPlugin, storm.cgroup.hierarchy.dir=/cgroup/storm_resources, storm.zookeeper.auth.password=null, ui.port=8081, drpc.authorizer.acl.strict=false, topology.message.timeout.secs=30, topology.error.throttle.interval.secs=10, topology.backpressure.check.millis=50, drpc.https.keystore.type=JKS, supervisor.memory.capacity.mb=4096.0, storm.metricstore.class=org.apache.storm.metricstore.rocksdb.RocksDbStore, drpc.authorizer.acl.filename=drpc-auth-acl.yaml, topology.builtin.metrics.bucket.size.secs=60, topology.spout.wait.park.microsec=100, storm.local.mode.zmq=false, pacemaker.client.max.threads=2, ui.header.buffer.bytes=4096, topology.shellbolt.max.pending=100, topology.serialized.message.size.metrics=false, drpc.max_buffer_size=1048576, drpc.disable.http.binding=true, storm.codedistributor.class=org.apache.storm.codedistributor.LocalFileSystemCodeDistributor, worker.profiler.childopts=-XX:+UnlockCommercialFeatures -XX:+FlightRecorder, nimbus.supervisor.timeout.secs=60, storm.supervisor.cgroup.rootdir=storm, topology.worker.max.heap.size.mb=768.0, storm.zookeeper.root=/storm, topology.disable.loadaware.messaging=false, storm.supervisor.hard.memory.limit.multiplier=2.0, nimbus.topology.validator=org.apache.storm.nimbus.DefaultTopologyValidator, worker.heartbeat.frequency.secs=1, storm.messaging.netty.max_wait_ms=1000, topology.backpressure.wait.progressive.level1.count=1, topology.max.error.report.per.interval=5, nimbus.thrift.max_buffer_size=1048576, storm.metricstore.rocksdb.location=storm_rocks, storm.supervisor.low.memory.threshold.mb=1024, pacemaker.max.threads=50, ui.pagination=20, ui.disable.http.binding=true, supervisor.blobstore.download.max_retries=3, topology.enable.message.timeouts=true, logviewer.disable.http.binding=true, storm.messaging.netty.transfer.batch.size=262144, topology.spout.wait.progressive.level2.count=0, blacklist.scheduler.strategy=org.apache.storm.scheduler.blacklist.strategies.DefaultBlacklistStrategy, storm.metricstore.rocksdb.retention_hours=240, supervisor.run.worker.as.user=false, storm.messaging.netty.client_worker_threads=1, topology.tasks=null, supervisor.thrift.socket.timeout.ms=5000, 
storm.group.mapping.service.params=null, drpc.http.port=3774, transactional.zookeeper.root=/transactional, supervisor.blobstore.download.thread.count=5, logviewer.filter=null, pacemaker.kerberos.users=[], topology.spout.wait.strategy=org.apache.storm.policy.WaitStrategyProgressive, storm.blobstore.inputstream.buffer.size.bytes=65536, supervisor.worker.heartbeats.max.timeout.secs=600, supervisor.worker.timeout.secs=30, topology.worker.receiver.thread.count=1, logviewer.max.sum.worker.logs.size.mb=4096, topology.executor.overflow.limit=0, topology.batch.flush.interval.millis=1, nimbus.file.copy.expiration.secs=600, pacemaker.port=6699, topology.worker.logwriter.childopts=-Xmx64m, drpc.http.creds.plugin=org.apache.storm.security.auth.DefaultHttpCredentialsPlugin, nimbus.topology.blobstore.deletion.delay.ms=300000, storm.blobstore.acl.validation.enabled=false, ui.filter.params=null, topology.workers=1, blacklist.scheduler.tolerance.time.secs=300, storm.supervisor.medium.memory.threshold.mb=1536, topology.environment=null, drpc.invocations.port=3773, storm.metricstore.rocksdb.create_if_missing=true, nimbus.cleanup.inbox.freq.secs=600, client.blobstore.class=org.apache.storm.blobstore.NimbusBlobStore, topology.fall.back.on.java.serialization=true, storm.nimbus.retry.intervalceiling.millis=60000, storm.nimbus.zookeeper.acls.fixup=true, logviewer.appender.name=A1, ui.users=null, pacemaker.childopts=-Xmx1024m, storm.messaging.netty.server_worker_threads=1, scheduler.display.resource=false, ui.actions.enabled=true, storm.thrift.socket.timeout.ms=600000, storm.topology.classpath.beginning.enabled=false, storm.zookeeper.connection.timeout=15000, topology.tick.tuple.freq.secs=null, nimbus.inbox.jar.expiration.secs=3600, topology.debug=false, storm.zookeeper.retry.interval=1000, storm.messaging.netty.buffer.high.watermark=16777216, storm.blobstore.dependency.jar.upload.chunk.size.bytes=1048576, worker.log.level.reset.poll.secs=30, storm.exhibitor.poll.uripath=/exhibitor/v1/cluster/list, storm.zookeeper.retry.times=5, nimbus.code.sync.freq.secs=120, topology.component.resources.offheap.memory.mb=0.0, topology.spout.wait.progressive.level1.count=0, topology.state.checkpoint.interval.ms=1000, topology.priority=29, supervisor.localizer.cleanup.interval.ms=30000, nimbus.host=127.0.0.1, storm.health.check.dir=healthchecks, supervisor.cpu.capacity=400.0, topology.backpressure.wait.progressive.level3.sleep.millis=1, storm.cgroup.resources=[cpu, memory], storm.worker.min.cpu.pcore.percent=0.0, topology.classpath=null, storm.nimbus.zookeeper.acls.check=true, num.stat.buckets=20, topology.spout.wait.progressive.level3.sleep.millis=1, supervisor.localizer.cache.target.size.mb=10240, topology.worker.childopts=null, drpc.https.port=-1, topology.bolt.wait.park.microsec=100, topology.max.replication.wait.time.sec=60, storm.cgroup.cgexec.cmd=/bin/cgexec, topology.acker.executors=null, topology.bolt.wait.progressive.level3.sleep.millis=1, supervisor.worker.start.timeout.secs=120, supervisor.worker.shutdown.sleep.secs=3, logviewer.max.per.worker.logs.size.mb=2048, topology.trident.batch.emit.interval.millis=500, task.heartbeat.frequency.secs=3, supervisor.enable=true, supervisor.thrift.max_buffer_size=1048576, supervisor.blobstore.class=org.apache.storm.blobstore.NimbusBlobStore, topology.producer.batch.size=1, drpc.worker.threads=64, resource.aware.scheduler.priority.strategy=org.apache.storm.scheduler.resource.strategies.priority.DefaultSchedulingPriorityStrategy, 
blacklist.scheduler.reporter=org.apache.storm.scheduler.blacklist.reporters.LogReporter, storm.messaging.netty.socket.backlog=500, storm.cgroup.inherit.cpuset.configs=false, nimbus.queue.size=100000, drpc.queue.size=128, ui.disable.spout.lag.monitoring=true, topology.eventlogger.executors=0, pacemaker.base.threads=10, nimbus.childopts=-Xmx1024m, topology.spout.recvq.skips=3, storm.resource.isolation.plugin.enable=false, nimbus.monitor.freq.secs=10, storm.supervisor.memory.limit.tolerance.margin.mb=128.0, storm.disable.symlinks=false, topology.localityaware.lower.bound=0.2, transactional.zookeeper.servers=null, nimbus.task.timeout.secs=30, logs.users=null, pacemaker.thrift.message.size.max=10485760, ui.host=0.0.0.0, supervisor.thrift.port=6628, topology.bolt.wait.strategy=org.apache.storm.policy.WaitStrategyProgressive, pacemaker.thread.timeout=10, storm.meta.serialization.delegate=org.apache.storm.serialization.GzipThriftSerializationDelegate, dev.zookeeper.path=/tmp/dev-storm-zookeeper, topology.skip.missing.kryo.registrations=false, drpc.invocations.threads=64, storm.zookeeper.session.timeout=20000, storm.metricstore.rocksdb.metadata_string_cache_capacity=4000, storm.workers.artifacts.dir=workers-artifacts, topology.component.resources.onheap.memory.mb=128.0, storm.log4j2.conf.dir=log4j2, storm.cluster.mode=distributed, ui.childopts=-Xmx768m, task.refresh.poll.secs=10, supervisor.childopts=-Xmx256m, task.credentials.poll.secs=30, storm.health.check.timeout.ms=5000, storm.blobstore.replication.factor=3, worker.profiler.command=flight.bash, storm.messaging.netty.buffer.low.watermark=8388608}
2020-05-25 14:51:41.877 o.a.s.z.LeaderElectorImp main [INFO] Queued up for leader lock.
2020-05-25 14:51:41.907 o.a.s.n.NimbusInfo main-EventThread [INFO] Nimbus figures out its name to 7480-GQY29H2.smarshcorp.com
2020-05-25 14:51:41.929 o.a.s.n.LeaderListenerCallback main-EventThread [INFO] Sync remote assignments and id-info to local
2020-05-25 14:51:41.963 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/word-topology-1-1589738489-stormcode.ser/7480-GQY29H2.smarshcorp.com:6627-1
2020-05-25 14:51:41.999 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/Stock-Topology-1-1589133962-stormcode.ser/7480-GQY29H2.smarshcorp.com:6627-1
2020-05-25 14:51:42.015 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/word-topology-1-1589738489-stormconf.ser/7480-GQY29H2.smarshcorp.com:6627-1
2020-05-25 14:51:42.035 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/Stock-Topology-1-1589133962-stormjar.jar/7480-GQY29H2.smarshcorp.com:6627-1
2020-05-25 14:51:42.052 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/Stock-Topology-1-1589133962-stormconf.ser/7480-GQY29H2.smarshcorp.com:6627-1
2020-05-25 14:51:42.081 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/word-topology-1-1589738489-stormjar.jar/7480-GQY29H2.smarshcorp.com:6627-1
2020-05-25 14:51:42.104 o.a.s.n.LeaderListenerCallback main-EventThread [INFO] active-topology-blobs [Stock-Topology-1-1589133962,word-topology-1-1589738489] local-topology-blobs [word-topology-1-1589738489-stormcode.ser,Stock-Topology-1-1589133962-stormcode.ser,word-topology-1-1589738489-stormconf.ser,Stock-Topology-1-1589133962-stormjar.jar,Stock-Topology-1-1589133962-stormconf.ser,word-topology-1-1589738489-stormjar.jar] diff-topology-blobs []
2020-05-25 14:51:42.239 o.a.s.d.m.ClientMetricsUtils main [INFO] Using statistics reporter plugin:org.apache.storm.daemon.metrics.reporters.JmxPreparableReporter
2020-05-25 14:51:42.297 o.a.s.d.m.r.JmxPreparableReporter main [INFO] Preparing...
2020-05-25 14:51:42.322 o.a.s.m.StormMetricsRegistry main [INFO] Started statistics report plugin...
2020-05-25 14:51:42.327 o.a.s.d.n.Nimbus main [INFO] Starting nimbus server for storm version '2.1.0'
2020-05-25 14:51:42.408 o.a.s.n.LeaderListenerCallback main-EventThread [INFO] active-topology-dependencies [] local-blobs [word-topology-1-1589738489-stormcode.ser,Stock-Topology-1-1589133962-stormcode.ser,word-topology-1-1589738489-stormconf.ser,Stock-Topology-1-1589133962-stormjar.jar,Stock-Topology-1-1589133962-stormconf.ser,word-topology-1-1589738489-stormjar.jar] diff-topology-dependencies []
2020-05-25 14:51:42.409 o.a.s.n.LeaderListenerCallback main-EventThread [INFO] Accepting leadership, all active topologies and corresponding dependencies found locally.
2020-05-25 14:51:42.409 o.a.s.z.LeaderListenerCallbackFactory main-EventThread [INFO] 7480-GQY29H2.smarshcorp.com gained leadership.
2020-05-25 14:51:42.603 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[4, 4] not alive
2020-05-25 14:51:42.603 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[1, 1] not alive
2020-05-25 14:51:42.603 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[2, 2] not alive
2020-05-25 14:51:42.604 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[3, 3] not alive
2020-05-25 14:51:42.604 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[1, 1] not alive
2020-05-25 14:51:42.604 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[3, 3] not alive
2020-05-25 14:51:42.604 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[2, 2] not alive
2020-05-25 14:51:42.618 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
2020-05-25 14:51:42.618 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
2020-05-25 14:51:42.618 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
2020-05-25 14:51:42.619 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
2020-05-25 14:51:45.096 o.a.s.d.n.Nimbus timer [INFO] TRANSITION: word-topology-1-1589738489 GAIN_LEADERSHIP null false
2020-05-25 14:51:45.098 o.a.s.d.n.Nimbus timer [INFO] TRANSITION: Stock-Topology-1-1589133962 GAIN_LEADERSHIP null false
2020-05-25 14:51:52.682 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[4, 4] not alive
2020-05-25 14:51:52.682 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[1, 1] not alive
2020-05-25 14:51:52.683 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[2, 2] not alive
2020-05-25 14:51:52.683 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[3, 3] not alive
2020-05-25 14:51:52.683 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[1, 1] not alive
2020-05-25 14:51:52.684 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[3, 3] not alive
2020-05-25 14:51:52.684 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[2, 2] not alive
2020-05-25 14:51:52.685 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
2020-05-25 14:51:52.686 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
2020-05-25 14:51:52.686 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
2020-05-25 14:51:52.687 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
2020-05-25 14:52:02.734 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[4, 4] not alive
2020-05-25 14:52:02.734 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[1, 1] not alive
2020-05-25 14:52:02.734 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[2, 2] not alive
2020-05-25 14:52:02.735 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[3, 3] not alive
2020-05-25 14:52:02.735 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[1, 1] not alive
2020-05-25 14:52:02.735 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[3, 3] not alive
2020-05-25 14:52:02.735 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[2, 2] not alive
2020-05-25 14:52:02.736 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
2020-05-25 14:52:02.737 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
2020-05-25 14:52:02.737 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
2020-05-25 14:52:02.737 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
2020-05-25 14:52:12.773 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[4, 4] not alive
2020-05-25 14:52:12.773 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[1, 1] not alive
2020-05-25 14:52:12.773 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[2, 2] not alive
2020-05-25 14:52:12.773 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[3, 3] not alive
2020-05-25 14:52:12.774 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[1, 1] not alive
2020-05-25 14:52:12.774 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[3, 3] not alive
2020-05-25 14:52:12.774 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[2, 2] not alive
2020-05-25 14:52:12.774 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
2020-05-25 14:52:12.774 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
2020-05-25 14:52:12.775 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
2020-05-25 14:52:12.775 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[4, 4] not alive
2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[1, 1] not alive
2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[2, 2] not alive
2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[3, 3] not alive
2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[1, 1] not alive
2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[3, 3] not alive
2020-05-25 14:52:22.810 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[2, 2] not alive
2020-05-25 14:52:22.811 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
2020-05-25 14:52:22.811 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
Here is my storm.yaml:
storm.zookeeper.servers:
    - "127.0.0.1"
nimbus.host: "127.0.0.1"
ui.port: 8081
storm.local.dir: "/Users/anshita.singh/storm/datadir/storm"
supervisor.slot.ports:
    -6700
    -6701
    -6702
    -6703
# storm.zookeeper.servers:
# - "server1"
# - "server2"
#
# nimbus.seeds: ["host1", "host2", "host3"]
#
#
# ##### These may optionally be filled in:
#
## List of custom serializations
# topology.kryo.register:
# - org.mycompany.MyType
# - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
# - org.mycompany.MyDecorator
#
## Locations of the drpc servers
# drpc.servers:
# - "server1"
# - "server2"
## Metrics Consumers
## max.retain.metric.tuples
## - task queue will be unbounded when max.retain.metric.tuples is equal or less than 0.
## whitelist / blacklist
## - when none of configuration for metric filter are specified, it'll be treated as 'pass all'.
## - you need to specify either whitelist or blacklist, or none of them. You can't specify both of them.
## - you can specify multiple whitelist / blacklist with regular expression
## expandMapType: expand metric with map type as value to multiple metrics
## - set to true when you would like to apply filter to expanded metrics
## - default value is false which is backward compatible value
## metricNameSeparator: separator between origin metric name and key of entry from map
## - only effective when expandMapType is set to true
## - default value is "."
# topology.metrics.consumer.register:
# - class: "org.apache.storm.metric.LoggingMetricsConsumer"
# max.retain.metric.tuples: 100
# parallelism.hint: 1
# - class: "org.mycompany.MyMetricsConsumer"
# max.retain.metric.tuples: 100
# whitelist:
# - "execute.*"
# - "^__complete-latency$"
# parallelism.hint: 1
# argument:
# - endpoint: "metrics-collector.mycompany.org"
# expandMapType: true
# metricNameSeparator: "."
## Cluster Metrics Consumers
# storm.cluster.metrics.consumer.register:
# - class: "org.apache.storm.metric.LoggingClusterMetricsConsumer"
# - class: "org.mycompany.MyMetricsConsumer"
# argument:
# - endpoint: "metrics-collector.mycompany.org"
#
# storm.cluster.metrics.consumer.publish.interval.secs: 60
# Event Logger
# topology.event.logger.register:
# - class: "org.apache.storm.metric.FileBasedEventLogger"
# - class: "org.mycompany.MyEventLogger"
# arguments:
# endpoint: "event-logger.mycompany.org"
# Metrics v2 configuration (optional)
#storm.metrics.reporters:
# # Graphite Reporter
# - class: "org.apache.storm.metrics2.reporters.GraphiteStormReporter"
# daemons:
# - "supervisor"
# - "nimbus"
# - "worker"
# report.period: 60
# report.period.units: "SECONDS"
# graphite.host: "localhost"
# graphite.port: 2003
#
# # Console Reporter
# - class: "org.apache.storm.metrics2.reporters.ConsoleStormReporter"
# daemons:
# - "worker"
# report.period: 10
# report.period.units: "SECONDS"
# filter:
# class: "org.apache.storm.metrics2.filters.RegexFilter"
# expression: ".*my_component.*emitted.*"
Can anyone tell me what configuration I have missed, if any? And please let me know if any other information is needed to debug this.
My environment:
Apache-storm-2.1.0
Apache-zookeeper-3.6.1
Solution:
Run the command below:
storm admin remove_corrupt_topologies

It looks like some corrupted topologies were lingering in the cluster state. Running this command fixed the issue.
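A follow-up note, beyond the accepted fix: the Nimbus log above prints a literal supervisor.slot.ports=-6700 -6701 -6702 -6703 next to the default supervisor.slots.ports=[6700, 6701, 6702, 6703], which suggests the key in storm.yaml is misspelled ("slot" instead of "slots") and the entries are missing the space after the dash, so they are not parsed as a YAML list. A corrected sketch of that section (ports taken from the question):

supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703

Also, "No available slots for topology" is what the scheduler reports when no supervisor has registered any worker slots, so on a single-node setup make sure a supervisor daemon (storm supervisor) is running alongside storm nimbus.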

Related

AMQP Closing all channels from connection when docker run on port 5672

I run RabbitMQ with docker-compose and it works well on port 15672 in the browser, but port 5672 is not working.
docker-compose
rabbitmq:
    image: 'rabbitmq:3-management-alpine'
    container_name: rabbitmq
    ports:
        - '5672:5672'
        - '15672:15672'
    environment:
        - RABBITMQ_NODE_TYPE=stats
        - RABBITMQ_NODE_NAME=rabbit@stats
        - RABBITMQ_ERL_COOKIE=s3cr3tc00ki3
        - RABBITMQ_DEFAULT_USER=rabbitmquser
        - RABBITMQ_DEFAULT_PASS=rabbitmquser
    volumes:
        - '/rabbitmq/data:/var/lib/rabbitmq/'
        - '/rabbitmq/log:/var/log/rabbitmq'
spring application.properties
spring.rabbitmq.host = 192.168.1.212
spring.rabbitmq.port = 15672
spring.rabbitmq.username = rabbitmquser
spring.rabbitmq.password = rabbitmquser
error in the docker log when entering http://192.168.100.12:5672 in the browser
2021-04-07 13:30:07.742 [info] <0.731.0> Resetting node maintenance status
2021-04-07 13:31:27.979 [info] <0.1050.0> accepting AMQP connection <0.1050.0> (192.168.2.2:62786 -> 172.19.0.4:5672)
2021-04-07 13:31:28.093 [error] <0.1050.0> closing AMQP connection <0.1050.0> (192.168.2.2:62786 -> 172.19.0.4:5672):
{bad_header,<<"GET / HT">>}
2021-04-07 13:31:28.111 [info] <0.1055.0> Closing all channels from connection '192.168.2.2:62786 -> 172.19.0.4:5672' because it has been closed
2021-04-07 13:31:29.224 [info] <0.1053.0> accepting AMQP connection <0.1053.0> (192.168.2.2:62787 -> 172.19.0.4:5672)
2021-04-07 13:31:29.225 [error] <0.1053.0> closing AMQP connection <0.1053.0> (192.168.2.2:62787 -> 172.19.0.4:5672):
{bad_header,<<"GET / HT">>}
2021-04-07 13:31:29.228 [info] <0.1062.0> Closing all channels from connection '192.168.2.2:62787 -> 172.19.0.4:5672' because it has been closed
2021-04-07 13:31:34.276 [info] <0.1060.0> accepting AMQP connection <0.1060.0> (192.168.2.2:62789 -> 172.19.0.4:5672)
2021-04-07 13:31:34.280 [error] <0.1060.0> closing AMQP connection <0.1060.0> (192.168.2.2:62789 -> 172.19.0.4:5672):
{bad_header,<<"GET / HT">>}
2021-04-07 13:31:34.282 [info] <0.1069.0> Closing all channels from connection '192.168.2.2:62789 -> 172.19.0.4:5672' because it has been closed
2021-04-07 13:31:44.279 [info] <0.1067.0> accepting AMQP connection <0.1067.0> (192.168.2.2:62790 -> 172.19.0.4:5672)
2021-04-07 13:31:44.280 [error] <0.1067.0> closing AMQP connection <0.1067.0> (192.168.2.2:62790 -> 172.19.0.4:5672):
{handshake_timeout,handshake}
2021-04-07 13:31:44.282 [info] <0.1073.0> Closing all channels from connection '192.168.2.2:62790 -> 172.19.0.4:5672' because it has been closed
error in spring
declaring queue for inbound: springCloudBus.anonymous.PRr6HWmwTqGU2akit5Rc9Q, bound to: springCloudBus
Attempting to connect to: [192.168.1.212:15672]
Channel 'springCloudBus.anonymous.PRr6HWmwTqGU2akit5Rc9Q.errors' has 1 subscriber(s).
Broker not available; cannot force queue declarations during start: java.net.ConnectException: Connection timed out: no further information
Consumer raised exception, processing can restart if the connection factory supports it. Exception summary: org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection timed out: no further information
Change the Spring configuration to use the AMQP port instead of the HTTP management port:
spring.rabbitmq.port = 5672

Port 15672 serves the HTTP management UI; AMQP clients must connect to 5672. That is also why the broker logs {bad_header,<<"GET / HT">>} above: a browser pointed at 5672 sends an HTTP GET where the broker expects the AMQP protocol header.
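A minimal connectivity check, sketched with the plain Java amqp-client library and the host/credentials from the question (adjust them to your setup); it should print the broker address once the AMQP port is reachable:

import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class AmqpSmokeTest {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("192.168.1.212"); // broker host from the question
        factory.setPort(5672);            // AMQP port, not the 15672 management UI
        factory.setUsername("rabbitmquser");
        factory.setPassword("rabbitmquser");
        // newConnection() performs the AMQP handshake; a timeout here points at
        // networking/port problems, a handshake error at wrong port or credentials
        try (Connection conn = factory.newConnection()) {
            System.out.println("AMQP connection OK: " + conn.getAddress());
        }
    }
}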

java.lang.NullPointerException: at utilities2.ExcelUtility.findCells(ExcelUtility.java:70)

I am trying to run my test cases from the command line using a Maven command.
Below is the error message I am getting. I was able to run the test cases without a problem when I right-click my XML file and run it with TestNG suites.
The problem is that I can't execute the test cases from my pom.xml. This only occurs when I am using Excel for data-driven tests. I suspect my POI version might not be compatible with my Maven version, or the Chrome driver might not be compatible with it. I am just guessing, and that is why I need your help.
C:\> mvn test -PClientAlert
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building cehproject 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ cehproject ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory C:\Users\akinrins\workspace\cehproject\src\main\resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ cehproject ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ cehproject ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory C:\Users\akinrins\workspace\cehproject\src\test\resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ cehproject ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ cehproject ---
[INFO] Surefire report directory: C:\Users\akinrins\workspace\cehproject\target\surefire-reports
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running TestSuite
Starting ChromeDriver 2.30.477700 (0057494ad8732195794a7b32078424f92a5fce41) on port 8611
Only local connections are allowed.
Jan 16, 2018 2:37:04 PM org.openqa.selenium.remote.ProtocolHandshake createSession
INFO: Attempting bi-dialect session, assuming Postel's Law holds true on the remote end
log4j:WARN No appenders could be found for logger (org.apache.http.client.protocol.RequestAddCookies).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Jan 16, 2018 2:37:06 PM org.openqa.selenium.remote.ProtocolHandshake createSession
INFO: Detected dialect: OSS
java.lang.NullPointerException
at utilities2.ExcelUtility.findCells(ExcelUtility.java:70)
at utilities2.ExcelUtility.getTestData(ExcelUtility.java:40)
at AlertTesting.ClientAlertTest.dataProvider(ClientAlertTest.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:108)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:55)
at org.testng.internal.MethodInvocationHelper.invokeMethodNoCheckedException(MethodInvocationHelper.java:45)
at org.testng.internal.MethodInvocationHelper.invokeDataProvider(MethodInvocationHelper.java:115)
at org.testng.internal.Parameters.handleParameters(Parameters.java:509)
at org.testng.internal.Invoker.handleParameters(Invoker.java:1308)
at org.testng.internal.Invoker.createParameters(Invoker.java:1036)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1126)
at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:126)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:109)
at org.testng.TestRunner.privateRun(TestRunner.java:744)
at org.testng.TestRunner.run(TestRunner.java:602)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:380)
at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:375)
at org.testng.SuiteRunner.privateRun(SuiteRunner.java:340)
at org.testng.SuiteRunner.run(SuiteRunner.java:289)
at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
at org.testng.TestNG.runSuitesSequentially(TestNG.java:1301)
at org.testng.TestNG.runSuitesLocally(TestNG.java:1226)
at org.testng.TestNG.runSuites(TestNG.java:1144)
at org.testng.TestNG.run(TestNG.java:1115)
at org.apache.maven.surefire.testng.TestNGExecutor.run(TestNGExecutor.java:295)
at org.apache.maven.surefire.testng.TestNGXmlTestSuite.execute(TestNGXmlTestSuite.java:84)
at org.apache.maven.surefire.testng.TestNGProvider.invoke(TestNGProvider.java:90)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
[Utils] [ERROR] [Error] org.testng.TestNGException:
Data Provider public java.lang.Object[][] AlertTesting.ClientAlertTest.dataProvider() must return either Object[][] or Iterator<Object>[], not class [[Ljava.lang.Object;
at org.testng.internal.MethodInvocationHelper.invokeDataProvider(MethodInvocationHelper.java:137)
at org.testng.internal.Parameters.handleParameters(Parameters.java:509)
at org.testng.internal.Invoker.handleParameters(Invoker.java:1308)
at org.testng.internal.Invoker.createParameters(Invoker.java:1036)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1126)
at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:126)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:109)
at org.testng.TestRunner.privateRun(TestRunner.java:744)
at org.testng.TestRunner.run(TestRunner.java:602)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:380)
at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:375)
at org.testng.SuiteRunner.privateRun(SuiteRunner.java:340)
at org.testng.SuiteRunner.run(SuiteRunner.java:289)
at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
at org.testng.TestNG.runSuitesSequentially(TestNG.java:1301)
at org.testng.TestNG.runSuitesLocally(TestNG.java:1226)
at org.testng.TestNG.runSuites(TestNG.java:1144)
at org.testng.TestNG.run(TestNG.java:1115)
at org.apache.maven.surefire.testng.TestNGExecutor.run(TestNGExecutor.java:295)
at org.apache.maven.surefire.testng.TestNGXmlTestSuite.execute(TestNGXmlTestSuite.java:84)
at org.apache.maven.surefire.testng.TestNGProvider.invoke(TestNGProvider.java:90)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Tests run: 6, Failures: 2, Errors: 0, Skipped: 4, Time elapsed: 9.846 sec <<< FAILURE! - in TestSuite
ClientAlertOne(AlertTesting.ClientAlertTest) Time elapsed: 9.712 sec <<< FAILURE!
java.lang.NoSuchMethodError: org.apache.poi.util.POILogger.log(ILjava/lang/Object;)V
at org.apache.poi.openxml4j.opc.PackageRelationshipCollection.parseRelationshipsPart(PackageRelationshipCollection.java:304)
at org.apache.poi.openxml4j.opc.PackageRelationshipCollection.<init>(PackageRelationshipCollection.java:156)
at org.apache.poi.openxml4j.opc.PackageRelationshipCollection.<init>(PackageRelationshipCollection.java:124)
at org.apache.poi.openxml4j.opc.PackagePart.loadRelationships(PackagePart.java:559)
at org.apache.poi.openxml4j.opc.PackagePart.<init>(PackagePart.java:112)
at org.apache.poi.openxml4j.opc.PackagePart.<init>(PackagePart.java:83)
at org.apache.poi.openxml4j.opc.PackagePart.<init>(PackagePart.java:128)
at org.apache.poi.openxml4j.opc.ZipPackagePart.<init>(ZipPackagePart.java:78)
at org.apache.poi.openxml4j.opc.ZipPackage.getPartsImpl(ZipPackage.java:218)
at org.apache.poi.openxml4j.opc.OPCPackage.getParts(OPCPackage.java:662)
at org.apache.poi.openxml4j.opc.OPCPackage.open(OPCPackage.java:269)
at org.apache.poi.util.PackageHelper.open(PackageHelper.java:39)
at org.apache.poi.xssf.usermodel.XSSFWorkbook.<init>(XSSFWorkbook.java:204)
at utilities2.ExcelUtility.setExcelFile(ExcelUtility.java:27)
at AlertTesting.ClientAlertTest.ClientAlertOne(ClientAlertTest.java:63)
TC_ClientAlertTest(AlertTesting.ClientAlertTest) Time elapsed: 9.743 sec <<< FAILURE!
org.testng.TestNGException:
Data Provider public java.lang.Object[][] AlertTesting.ClientAlertTest.dataProvider() must return either Object[][] or Iterator<Object>[], not class [[Ljava.lang.Object;
at org.testng.internal.MethodInvocationHelper.invokeDataProvider(MethodInvocationHelper.java:137)
at org.testng.internal.Parameters.handleParameters(Parameters.java:509)
at org.testng.internal.Invoker.handleParameters(Invoker.java:1308)
at org.testng.internal.Invoker.createParameters(Invoker.java:1036)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1126)
at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:126)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:109)
at org.testng.TestRunner.privateRun(TestRunner.java:744)
at org.testng.TestRunner.run(TestRunner.java:602)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:380)
at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:375)
at org.testng.SuiteRunner.privateRun(SuiteRunner.java:340)
at org.testng.SuiteRunner.run(SuiteRunner.java:289)
at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
at org.testng.TestNG.runSuitesSequentially(TestNG.java:1301)
at org.testng.TestNG.runSuitesLocally(TestNG.java:1226)
at org.testng.TestNG.runSuites(TestNG.java:1144)
at org.testng.TestNG.run(TestNG.java:1115)
at org.apache.maven.surefire.testng.TestNGExecutor.run(TestNGExecutor.java:295)
at org.apache.maven.surefire.testng.TestNGXmlTestSuite.execute(TestNGXmlTestSuite.java:84)
at org.apache.maven.surefire.testng.TestNGProvider.invoke(TestNGProvider.java:90)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Results :
Failed tests:
ClientAlertTest.ClientAlertOne:63 » NoSuchMethod org.apache.poi.util.POILogger...
ClientAlertTest.TC_ClientAlertTest » TestNG Data Provider public java.lang.Ob...
Tests run: 6, Failures: 2, Errors: 0, Skipped: 4
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 13.268 s
[INFO] Finished at: 2018-01-16T14:37:13-05:00
[INFO] Final Memory: 13M/309M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test (default-test) on project cehproject: There are test failures.
[ERROR]
[ERROR] Please refer to C:\Users\akinrins\workspace\cehproject\target\surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException

Apache storm Supervisor routinely shutting down worker

I made a topology in Apache Storm (0.9.6) with kafka-storm and ZooKeeper (3.4.6)
(3 ZooKeeper and 3 supervisor nodes, running 3 topologies).
I added 2 more Storm/ZooKeeper nodes and changed the topology.workers configuration from 3 to 5.
But since adding the 2 nodes, the Storm supervisor routinely shuts down workers. Checked with the iostat command, read and write throughput is under 1 MB.
The supervisor log shows the following.
2016-10-19T15:07:38.904+0900 b.s.d.supervisor [INFO] Shutting down and clearing state for id ee13ada9-641e-463a-9be5-f3ed66fdb8f3. Current supervisor time: 1476857258. State: :timed-out, Heartbeat: #backtype.storm.daemon.common.WorkerHeartbeat{:time-secs 1476857226, :storm-id "top3-17-1476839721", :executors #{[36 36] [6 6] [11 11] [16 16] [21 21] [26 26] [31 31] [-1 -1] [1 1]}, :port 6701}
2016-10-19T15:07:38.905+0900 b.s.d.supervisor [INFO] Shutting down b278933f-f9c7-4189-b615-1d70c7988f17:ee13ada9-641e-463a-9be5-f3ed66fdb8f3
2016-10-19T15:07:38.907+0900 b.s.util [INFO] Error when trying to kill 9306. Process is probably already dead.
2016-10-19T15:07:44.948+0900 b.s.d.supervisor [INFO] Shutting down and clearing state for id d6df820a-7c29-4bff-a606-9e8e36fafab2. Current supervisor time: 1476857264. State: :disallowed, Heartbeat: #backtype.storm.daemon.common.WorkerHeartbeat{:time-secs 1476857264, :storm-id "top3-17-1476839721", :executors #{[-1 -1]}, :port 6701}
2016-10-19T15:07:44.949+0900 b.s.d.supervisor [INFO] Shutting down b278933f-f9c7-4189-b615-1d70c7988f17:d6df820a-7c29-4bff-a606-9e8e36fafab2
2016-10-19T15:07:45.954+0900 b.s.util [INFO] Error when trying to kill 11171. Process is probably already dead.
2016-10-19T15:07:45.954+0900 b.s.d.supervisor [INFO] Shut down b278933f-f9c7-4189-b615-1d70c7988f17:d6df820a-7c29-4bff-a606-9e8e36fafab2
And the zookeeper.out log shows the following (the xxx IP address is another Storm/ZooKeeper node):
2016-09-20 02:31:06,031 [myid:5] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /xxx.xxx.xxx.xxx:39426 which had sessionid 0x5574372bbf00004
2016-09-20 02:31:08,116 [myid:5] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x5574372bbf0000a, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)
I don't know why the workers go down routinely. How can I fix it? Is something wrong?
My ZooKeeper and Storm configuration is below.
zoo.cfg(same all nodes)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/log/zkdata/1
clientPort=2181
server.1=storm01:2888:3888
server.2=storm02:2888:3888
server.3=storm03:2888:3888
server.4=storm04:2888:3888
server.5=storm05:2888:3888
autopurge.purgeInterval=1
storm.yaml
storm.zookeeper.servers:
    - "storm01"
    - "storm02"
    - "storm03"
    - "storm04"
    - "storm05"
storm.zookeeper.port: 2181
zookeeper.multiple.setup:
    follower.port: 2888
    election.port: 3888
nimbus.host: "storm01"
storm.supervisor.hosts:
    - "storm01"
    - "storm02"
    - "storm03"
    - "storm04"
    - "storm05"
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
    - 6704
storm.local.dir: /log/storm-data
worker.childopts: "-Xmx5120m -Djava.net.preferIPv4Stack=true"
topology.workers: 5
storm.log.dir: /log/storm-log

How to resolve missing class org.slf4j.helpers.MarkerIgnoringBase when building Kuali Student

When building Kuali Student from https://github.com/kuali-student/ks-development using the command mvn -skipTests=true -Dmaven.failsafe.skip=true clean install, I receive the following error:
[INFO] Reactor Summary:
[INFO]
[INFO] KS DB .............................................. SUCCESS [ 7.816 s]
[INFO] KS DB Validation ................................... SUCCESS [ 0.573 s]
[INFO] KS Impex ........................................... SUCCESS [ 0.080 s]
[INFO] KS LUM Rice ........................................ SUCCESS [ 6.436 s]
[INFO] KS LUM UI Common ................................... SUCCESS [ 1.885 s]
[INFO] KS LUM Program ..................................... SUCCESS [ 1.785 s]
[INFO] KS LUM UI .......................................... SUCCESS [ 2.505 s]
[INFO] KS Enroll UI ....................................... FAILURE [ 0.502 s]
[INFO] KS CM KRAD ......................................... SKIPPED
[INFO] KS Security ........................................ SKIPPED
[INFO] KS Standard Security ............................... SKIPPED
[INFO] KS Security Token Service .......................... SKIPPED
[INFO] KS Common Kitchen Sink ............................. SKIPPED
[INFO] KS Common Web ...................................... SKIPPED
[INFO] KS Curriculum Management Deployment Resources ...... SKIPPED
[INFO] KS Enroll Deployment Resources ..................... SKIPPED
[INFO] KS Enroll Rice ..................................... SKIPPED
[INFO] KS Web ............................................. SKIPPED
[INFO] KS with Rice Bundled ............................... SKIPPED
[INFO] KS with Rice Embedded .............................. SKIPPED
[INFO] KS Rice Standalone ................................. SKIPPED
[INFO] KS Metro ........................................... SKIPPED
[INFO] KS Eclipselink Pom ................................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 23.640 s
[INFO] Finished at: 2015-11-04T08:15:37-10:00
[INFO] Final Memory: 148M/850M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.github.eirslett:frontend-maven-plugin:0.0.16:install-node-and-npm (install node and npm) on project ks-enroll-ui: Execution install node and npm of goal com.github.eirslett:frontend-maven-plugin:0.0.16:install-node-and-npm failed: A required class was missing while executing com.github.eirslett:frontend-maven-plugin:0.0.16:install-node-and-npm: org/slf4j/helpers/MarkerIgnoringBase
[ERROR] -----------------------------------------------------
[ERROR] realm = plugin>com.github.eirslett:frontend-maven-plugin:0.0.16
[ERROR] strategy = org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy
[ERROR] urls[0] = file:/j/m2/ks-development/com/github/eirslett/frontend-maven-plugin/0.0.16/frontend-maven-plugin-0.0.16.jar
[ERROR] urls[1] = file:/j/m2/ks-development/com/github/eirslett/frontend-plugin-core/0.0.16/frontend-plugin-core-0.0.16.jar
[ERROR] urls[2] = file:/j/m2/ks-development/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar
[ERROR] urls[3] = file:/j/m2/ks-development/org/codehaus/jackson/jackson-core-asl/1.9.13/jackson-core-asl-1.9.13.jar
[ERROR] urls[4] = file:/j/m2/ks-development/org/apache/commons/commons-compress/1.5/commons-compress-1.5.jar
[ERROR] urls[5] = file:/j/m2/ks-development/org/tukaani/xz/1.2/xz-1.2.jar
[ERROR] urls[6] = file:/j/m2/ks-development/commons-io/commons-io/1.3.2/commons-io-1.3.2.jar
[ERROR] urls[7] = file:/j/m2/ks-development/org/apache/httpcomponents/httpclient/4.3.1/httpclient-4.3.1.jar
[ERROR] urls[8] = file:/j/m2/ks-development/org/apache/httpcomponents/httpcore/4.3/httpcore-4.3.jar
[ERROR] urls[9] = file:/j/m2/ks-development/commons-logging/commons-logging/1.1.3/commons-logging-1.1.3.jar
[ERROR] urls[10] = file:/j/m2/ks-development/commons-codec/commons-codec/1.6/commons-codec-1.6.jar
[ERROR] urls[11] = file:/j/m2/ks-development/org/codehaus/plexus/plexus-utils/3.0.10/plexus-utils-3.0.10.jar
[ERROR] urls[12] = file:/j/m2/ks-development/javax/enterprise/cdi-api/1.0/cdi-api-1.0.jar
[ERROR] urls[13] = file:/j/m2/ks-development/javax/annotation/jsr250-api/1.0/jsr250-api-1.0.jar
[ERROR] urls[14] = file:/j/m2/ks-development/com/google/guava/guava/10.0.1/guava-10.0.1.jar
[ERROR] urls[15] = file:/j/m2/ks-development/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar
[ERROR] urls[16] = file:/j/m2/ks-development/org/sonatype/sisu/sisu-guice/3.1.0/sisu-guice-3.1.0-no_aop.jar
[ERROR] urls[17] = file:/j/m2/ks-development/aopalliance/aopalliance/1.0/aopalliance-1.0.jar
[ERROR] urls[18] = file:/j/m2/ks-development/org/eclipse/sisu/org.eclipse.sisu.inject/0.0.0.M2a/org.eclipse.sisu.inject-0.0.0.M2a.jar
[ERROR] urls[19] = file:/j/m2/ks-development/asm/asm/3.3.1/asm-3.3.1.jar
[ERROR] urls[20] = file:/j/m2/ks-development/org/codehaus/plexus/plexus-component-annotations/1.5.5/plexus-component-annotations-1.5.5.jar
[ERROR] urls[21] = file:/j/m2/ks-development/org/apache/maven/plugin-tools/maven-plugin-annotations/3.2/maven-plugin-annotations-3.2.jar
[ERROR] urls[22] = file:/j/m2/ks-development/com/googlecode/slf4j-maven-plugin-log/slf4j-maven-plugin-log/1.0.0/slf4j-maven-plugin-log-1.0.0.jar
[ERROR] Number of foreign imports: 1
[ERROR] import: Entry[import from realm ClassRealm[project>org.kuali.student:student:2.1.1-FR2-M1-SNAPSHOT, parent: ClassRealm[maven.api, parent: null]]]
[ERROR]
[ERROR] -----------------------------------------------------: org.slf4j.helpers.MarkerIgnoringBase
...
Appears to be related to https://issues.apache.org/jira/browse/MNG-5787, which has been fixed for Maven 3.3.8 (not yet released at the time of this writing). One of the comments suggests using Maven 3.2.5 as a workaround, which resolved this error for me.
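A minimal sketch of that workaround, assuming Maven 3.2.5 has been unpacked to an illustrative path and using the conventional -DskipTests spelling of the skip flag:

# check which Maven version is currently on the PATH
mvn -version

# build with a separately installed Maven 3.2.5 (install path is illustrative)
/opt/apache-maven-3.2.5/bin/mvn -DskipTests=true -Dmaven.failsafe.skip=true clean install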

Storm HDFS Bolt not working

So I've just started working with Storm and am trying to understand it. I am trying to connect to a Kafka topic, read the data, and write it to an HDFS bolt.
At first I created the topology without shuffleGrouping("stormspout"), and the Storm UI showed that the spout was consuming data from the topic, but nothing was being written by the bolt (except for the empty files it was creating on HDFS). I then added shuffleGrouping("stormspout"); and now the bolt appears to be throwing an error. If anyone can help with this, I will really appreciate it.
Thanks,
Colman
Error
2015-04-13 00:02:58 s.k.PartitionManager [INFO] Read partition information from: /storm/partition_0 --> null
2015-04-13 00:02:58 s.k.PartitionManager [INFO] No partition information found, using configuration to determine offset
2015-04-13 00:02:58 s.k.PartitionManager [INFO] Last commit offset from zookeeper: 0
2015-04-13 00:02:58 s.k.PartitionManager [INFO] Commit offset 0 is more than 9223372036854775807 behind, resetting to startOffsetTime=-2
2015-04-13 00:02:58 s.k.PartitionManager [INFO] Starting Kafka 192.168.134.137:0 from offset 0
2015-04-13 00:02:58 s.k.ZkCoordinator [INFO] Task [1/1] Finished refreshing
2015-04-13 00:02:58 b.s.d.task [INFO] Emitting: stormspout default [colmanblah]
2015-04-13 00:02:58 b.s.d.executor [INFO] TRANSFERING tuple TASK: 2 TUPLE: source: stormspout:3, stream: default, id: {462820364856350458=5573117062061876630}, [colmanblah]
2015-04-13 00:02:58 b.s.d.task [INFO] Emitting: stormspout __ack_init [462820364856350458 5573117062061876630 3]
2015-04-13 00:02:58 b.s.d.executor [INFO] TRANSFERING tuple TASK: 1 TUPLE: source: stormspout:3, stream: __ack_init, id: {}, [462820364856350458 5573117062061876630 3]
2015-04-13 00:02:58 b.s.d.executor [INFO] Processing received message FOR 1 TUPLE: source: stormspout:3, stream: __ack_init, id: {}, [462820364856350458 5573117062061876630 3]
2015-04-13 00:02:58 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: TUPLE: source: stormspout:3, stream: __ack_init, id: {}, [462820364856350458 5573117062061876630 3]
2015-04-13 00:02:58 b.s.d.executor [INFO] Execute done TUPLE source: stormspout:3, stream: __ack_init, id: {}, [462820364856350458 5573117062061876630 3] TASK: 1 DELTA:
2015-04-13 00:02:59 b.s.d.executor [INFO] Prepared bolt stormbolt:(2)
2015-04-13 00:02:59 b.s.d.executor [INFO] Processing received message FOR 2 TUPLE: source: stormspout:3, stream: default, id: {462820364856350458=5573117062061876630}, [colmanblah]
2015-04-13 00:02:59 b.s.util [ERROR] Async loop died!
java.lang.RuntimeException: java.lang.NullPointerException
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:128) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$fn__5697$fn__5710$fn__5761.invoke(executor.clj:794) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.util$async_loop$fn__452.invoke(util.clj:465) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
Caused by: java.lang.NullPointerException: null
at org.apache.storm.hdfs.bolt.HdfsBolt.execute(HdfsBolt.java:92) ~[storm-hdfs-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$fn__5697$tuple_action_fn__5699.invoke(executor.clj:659) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$mk_task_receiver$fn__5620.invoke(executor.clj:415) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.disruptor$clojure_handler$reify__1741.onEvent(disruptor.clj:58) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:120) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
... 6 common frames omitted
2015-04-08 04:26:39 b.s.d.executor [ERROR]
java.lang.RuntimeException: java.lang.NullPointerException
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:128) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$fn__5697$fn__5710$fn__5761.invoke(executor.clj:794) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.util$async_loop$fn__452.invoke(util.clj:465) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
Caused by: java.lang.NullPointerException: null
at org.apache.storm.hdfs.bolt.HdfsBolt.execute(HdfsBolt.java:92) ~[storm-hdfs-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$fn__5697$tuple_action_fn__5699.invoke(executor.clj:659) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$mk_task_receiver$fn__5620.invoke(executor.clj:415) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.disruptor$clojure_handler$reify__1741.onEvent(disruptor.clj:58) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:120) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
Code:
TopologyBuilder builder = new TopologyBuilder();
Config config = new Config();
//config.put(Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS, 7000);
config.setNumWorkers(1);
config.setDebug(true);
//LocalCluster cluster = new LocalCluster();

// zookeeper
BrokerHosts brokerHosts = new ZkHosts("192.168.134.137:2181", "/brokers");

// spout
SpoutConfig spoutConfig = new SpoutConfig(brokerHosts, "myTopic", "/kafkastorm", "KafkaSpout");
spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
spoutConfig.forceFromStart = true;
builder.setSpout("stormspout", new KafkaSpout(spoutConfig), 4);

// bolt
SyncPolicy syncPolicy = new CountSyncPolicy(10); // synchronize data buffer with the filesystem every 10 tuples
FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(5.0f, Units.MB); // rotate data files when they reach five MB
FileNameFormat fileNameFormat = new DefaultFileNameFormat().withPath("/stormstuff"); // use default, Storm-generated file names
builder.setBolt("stormbolt", new HdfsBolt()
        .withFsUrl("hdfs://192.168.134.137:8020") //54310
        .withSyncPolicy(syncPolicy)
        .withRotationPolicy(rotationPolicy)
        .withFileNameFormat(fileNameFormat), 2
).shuffleGrouping("stormspout");

//cluster.submitTopology("ColmansStormTopology", config, builder.createTopology());
try {
    StormSubmitter.submitTopologyWithProgressBar("ColmansStormTopology", config, builder.createTopology());
} catch (AlreadyAliveException e) {
    e.printStackTrace();
} catch (InvalidTopologyException e) {
    e.printStackTrace();
}
POM.XML dependencies
<dependencies>
    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-core</artifactId>
        <version>0.9.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-kafka</artifactId>
        <version>0.9.3</version>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.17</version>
    </dependency>
    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-hdfs</artifactId>
        <version>0.9.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.8.1.1</version>
        <exclusions>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-simple</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
</dependencies>
First of all, try to emit the values from the execute method. If you are emitting from different worker threads, then let all the worker threads feed the data into a LinkedBlockingQueue, and only a single worker thread should emit the values from the LinkedBlockingQueue.
Secondly, try to set Config.setMaxSpoutPending to some value and run the code again; if the scenario persists, try to reduce that value.
Reference - Config.TOPOLOGY_MAX_SPOUT_PENDING: This sets the maximum number of spout tuples that can be pending on a single spout task at once (pending means the tuple has not been acked or failed yet). It is highly recommended you set this config to prevent queue explosion.
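For illustration, a one-line sketch of that suggestion against the Config object from the topology code in the question (the value 1000 is an arbitrary starting point to tune down from):

// cap the number of un-acked tuples per spout task so the
// downstream queues cannot grow without bound
config.setMaxSpoutPending(1000);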
I eventually figured this out by going through the Storm source code.
I wasn't setting
RecordFormat format = new DelimitedRecordFormat().withFieldDelimiter("|");
and including it like:
builder.setBolt("stormbolt", new HdfsBolt()
.withFsUrl("hdfs://192.168.134.137:8020")//54310
.withSyncPolicy(syncPolicy)
.withRecordFormat(format)
.withRotationPolicy(rotationPolicy)
.withFileNameFormat(fileNameFormat),1
).shuffleGrouping("stormspout");
In the HdfsBolt.java class, it tries to use the record format and basically falls over if it's not set; that was where the NPE was coming from.
Hope this helps someone else out; make sure you have set all the bits that are required in this class. A more useful error message such as "RecordFormat not set" would be nice...
