Storm HDFS Bolt not working - hadoop
So I've just started working with Storm and am trying to understand it. I am trying to connect to a Kafka topic, read the data, and write it to HDFS via the HDFS bolt.
At first I created the topology without shuffleGrouping("stormspout"), and the Storm UI showed that the spout was consuming data from the topic, but nothing was being written by the bolt (except for the empty files it was creating on HDFS). I then added shuffleGrouping("stormspout"); and now the bolt appears to be throwing an error. If anyone can help with this, I would really appreciate it.
Thanks,
Colman
Error
2015-04-13 00:02:58 s.k.PartitionManager [INFO] Read partition information from: /storm/partition_0 --> null
2015-04-13 00:02:58 s.k.PartitionManager [INFO] No partition information found, using configuration to determine offset
2015-04-13 00:02:58 s.k.PartitionManager [INFO] Last commit offset from zookeeper: 0
2015-04-13 00:02:58 s.k.PartitionManager [INFO] Commit offset 0 is more than 9223372036854775807 behind, resetting to startOffsetTime=-2
2015-04-13 00:02:58 s.k.PartitionManager [INFO] Starting Kafka 192.168.134.137:0 from offset 0
2015-04-13 00:02:58 s.k.ZkCoordinator [INFO] Task [1/1] Finished refreshing
2015-04-13 00:02:58 b.s.d.task [INFO] Emitting: stormspout default [colmanblah]
2015-04-13 00:02:58 b.s.d.executor [INFO] TRANSFERING tuple TASK: 2 TUPLE: source: stormspout:3, stream: default, id: {462820364856350458=5573117062061876630}, [colmanblah]
2015-04-13 00:02:58 b.s.d.task [INFO] Emitting: stormspout __ack_init [462820364856350458 5573117062061876630 3]
2015-04-13 00:02:58 b.s.d.executor [INFO] TRANSFERING tuple TASK: 1 TUPLE: source: stormspout:3, stream: __ack_init, id: {}, [462820364856350458 5573117062061876630 3]
2015-04-13 00:02:58 b.s.d.executor [INFO] Processing received message FOR 1 TUPLE: source: stormspout:3, stream: __ack_init, id: {}, [462820364856350458 5573117062061876630 3]
2015-04-13 00:02:58 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: TUPLE: source: stormspout:3, stream: __ack_init, id: {}, [462820364856350458 5573117062061876630 3]
2015-04-13 00:02:58 b.s.d.executor [INFO] Execute done TUPLE source: stormspout:3, stream: __ack_init, id: {}, [462820364856350458 5573117062061876630 3] TASK: 1 DELTA:
2015-04-13 00:02:59 b.s.d.executor [INFO] Prepared bolt stormbolt:(2)
2015-04-13 00:02:59 b.s.d.executor [INFO] Processing received message FOR 2 TUPLE: source: stormspout:3, stream: default, id: {462820364856350458=5573117062061876630}, [colmanblah]
2015-04-13 00:02:59 b.s.util [ERROR] Async loop died!
java.lang.RuntimeException: java.lang.NullPointerException
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:128) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$fn__5697$fn__5710$fn__5761.invoke(executor.clj:794) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.util$async_loop$fn__452.invoke(util.clj:465) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
Caused by: java.lang.NullPointerException: null
at org.apache.storm.hdfs.bolt.HdfsBolt.execute(HdfsBolt.java:92) ~[storm-hdfs-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$fn__5697$tuple_action_fn__5699.invoke(executor.clj:659) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$mk_task_receiver$fn__5620.invoke(executor.clj:415) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.disruptor$clojure_handler$reify__1741.onEvent(disruptor.clj:58) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:120) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
... 6 common frames omitted
2015-04-08 04:26:39 b.s.d.executor [ERROR]
java.lang.RuntimeException: java.lang.NullPointerException
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:128) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$fn__5697$fn__5710$fn__5761.invoke(executor.clj:794) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.util$async_loop$fn__452.invoke(util.clj:465) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
Caused by: java.lang.NullPointerException: null
at org.apache.storm.hdfs.bolt.HdfsBolt.execute(HdfsBolt.java:92) ~[storm-hdfs-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$fn__5697$tuple_action_fn__5699.invoke(executor.clj:659) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$mk_task_receiver$fn__5620.invoke(executor.clj:415) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.disruptor$clojure_handler$reify__1741.onEvent(disruptor.clj:58) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:120) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
Code:
TopologyBuilder builder = new TopologyBuilder();
Config config = new Config();
//config.put(Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS, 7000);
config.setNumWorkers(1);
config.setDebug(true);
//LocalCluster cluster = new LocalCluster();

// zookeeper
BrokerHosts brokerHosts = new ZkHosts("192.168.134.137:2181", "/brokers");

// spout
SpoutConfig spoutConfig = new SpoutConfig(brokerHosts, "myTopic", "/kafkastorm", "KafkaSpout");
spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
spoutConfig.forceFromStart = true;
builder.setSpout("stormspout", new KafkaSpout(spoutConfig), 4);

// bolt
SyncPolicy syncPolicy = new CountSyncPolicy(10); // synchronize the data buffer with the filesystem every 10 tuples
FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(5.0f, Units.MB); // rotate data files when they reach 5 MB
FileNameFormat fileNameFormat = new DefaultFileNameFormat().withPath("/stormstuff"); // use default, Storm-generated file names
builder.setBolt("stormbolt", new HdfsBolt()
        .withFsUrl("hdfs://192.168.134.137:8020") //54310
        .withSyncPolicy(syncPolicy)
        .withRotationPolicy(rotationPolicy)
        .withFileNameFormat(fileNameFormat), 2
).shuffleGrouping("stormspout");

//cluster.submitTopology("ColmansStormTopology", config, builder.createTopology());
try {
    StormSubmitter.submitTopologyWithProgressBar("ColmansStormTopology", config, builder.createTopology());
} catch (AlreadyAliveException e) {
    e.printStackTrace();
} catch (InvalidTopologyException e) {
    e.printStackTrace();
}
pom.xml dependencies

<dependencies>
    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-core</artifactId>
        <version>0.9.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-kafka</artifactId>
        <version>0.9.3</version>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.17</version>
    </dependency>
    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-hdfs</artifactId>
        <version>0.9.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.8.1.1</version>
        <exclusions>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-simple</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
</dependencies>
First of all, try to emit the values from the execute method. If you are producing results on different worker threads, have all the worker threads feed their data into a LinkedBlockingQueue, and allow only a single thread (the one Storm calls execute on) to emit the values drained from that LinkedBlockingQueue.
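One way to arrange that hand-off is sketched below (the class and field names are illustrative, not taken from the question's code): background threads enqueue finished results, and only the executor thread that Storm calls execute() on drains the queue and emits.

import java.util.Map;
import java.util.concurrent.LinkedBlockingQueue;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class QueueDrainingBolt extends BaseRichBolt {
    // Background worker threads offer finished results here; only the
    // executor thread ever calls emit.
    private final LinkedBlockingQueue<String> pending = new LinkedBlockingQueue<String>();
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        // Hand the tuple's work off to background threads elsewhere; those
        // threads call pending.offer(result) when done. Here we drain
        // whatever is ready and emit it from this thread only.
        String result;
        while ((result = pending.poll()) != null) {
            collector.emit(new Values(result));
        }
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("value"));
    }
}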
Secondly, try setting Config.setMaxSpoutPending to some value and run the code again; if the problem persists, try reducing that value.
Reference - Config.TOPOLOGY_MAX_SPOUT_PENDING: this sets the maximum number of spout tuples that can be pending on a single spout task at once (pending means the tuple has not been acked or failed yet). It is highly recommended that you set this config to prevent queue explosion.
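For example, a minimal sketch of capping pending tuples in the topology setup above (the value 1000 is an arbitrary illustrative starting point, not a recommendation):

// Cap the number of un-acked tuples outstanding per spout task;
// reduce this value if the bolt still cannot keep up with the spout.
config.setMaxSpoutPending(1000);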
I eventually figured this out by going through the Storm source code.
I wasn't setting
RecordFormat format = new DelimitedRecordFormat().withFieldDelimiter("|");
and including it like this:
builder.setBolt("stormbolt", new HdfsBolt()
.withFsUrl("hdfs://192.168.134.137:8020")//54310
.withSyncPolicy(syncPolicy)
.withRecordFormat(format)
.withRotationPolicy(rotationPolicy)
.withFileNameFormat(fileNameFormat),1
).shuffleGrouping("stormspout");
In the HdfsBolt.java class, execute tries to use the record format and basically falls over if it's not set. That was where the NPE was coming from.
Hope this helps someone else out; make sure you have set everything that is required by this class. A more useful error message, such as "RecordFormat not set", would be nice....
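For context, a minimal sketch of the failure mode (the class and method names here are illustrative; RecordFormat is the storm-hdfs interface the bolt reads tuples through):

import backtype.storm.tuple.Tuple;
import org.apache.storm.hdfs.bolt.format.RecordFormat;

public class NpeDemo {
    // Mirrors an HdfsBolt whose withRecordFormat(...) was never called.
    private RecordFormat format;

    public byte[] toBytes(Tuple tuple) {
        // Dereferencing the unset format throws the NullPointerException
        // seen at HdfsBolt.execute in the stack trace above.
        return this.format.format(tuple);
    }
}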
Related
Getting "No available slots for topology" error for storm nimbus
I am new to the apache-storm. I am trying to set up a local storm cluster. I have setup zookeeper using the following link and when I start zookeeper it's running fine.But when I start nimbus using start nimbus command I am seeing an error No slot available for topology in the nimbus.log file. My nimbus.log file: SendThread(kubernetes.docker.internal:2181) [INFO] Opening socket connection to server kubernetes.docker.internal/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) 2020-05-25 14:51:37.260 o.a.s.z.ClientZookeeper main [INFO] Starting ZK Curator 2020-05-25 14:51:37.260 o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl main [INFO] Starting 2020-05-25 14:51:37.261 o.a.s.s.o.a.z.ZooKeeper main [INFO] Initiating client connection, connectString=127.0.0.1:2181/storm sessionTimeout=20000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState#35beb15e 2020-05-25 14:51:37.261 o.a.s.s.o.a.z.ClientCnxn main-SendThread(kubernetes.docker.internal:2181) [INFO] Socket connection established to kubernetes.docker.internal/127.0.0.1:2181, initiating session 2020-05-25 14:51:37.263 o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl main [INFO] Default schema 2020-05-25 14:51:37.264 o.a.s.s.o.a.z.ClientCnxn main-SendThread(kubernetes.docker.internal:2181) [INFO] Opening socket connection to server kubernetes.docker.internal/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) 2020-05-25 14:51:37.265 o.a.s.s.o.a.z.ClientCnxn main-SendThread(kubernetes.docker.internal:2181) [INFO] Session establishment complete on server kubernetes.docker.internal/127.0.0.1:2181, sessionid = 0x1000ebc40020006, negotiated timeout = 20000 2020-05-25 14:51:37.266 o.a.s.s.o.a.z.ClientCnxn main-SendThread(kubernetes.docker.internal:2181) [INFO] Socket connection established to kubernetes.docker.internal/127.0.0.1:2181, initiating session 2020-05-25 14:51:37.266 o.a.s.s.o.a.c.f.s.ConnectionStateManager main-EventThread [INFO] State change: CONNECTED 2020-05-25 14:51:37.270 o.a.s.s.o.a.z.ClientCnxn main-SendThread(kubernetes.docker.internal:2181) [INFO] Session establishment complete on server kubernetes.docker.internal/127.0.0.1:2181, sessionid = 0x1000ebc40020007, negotiated timeout = 20000 2020-05-25 14:51:37.271 o.a.s.s.o.a.c.f.s.ConnectionStateManager main-EventThread [INFO] State change: CONNECTED 2020-05-25 14:51:41.791 o.a.s.n.NimbusInfo main [INFO] Nimbus figures out its name to 7480-GQY29H2.smarshcorp.com 2020-05-25 14:51:41.817 o.a.s.d.n.Nimbus main [INFO] Starting Nimbus with conf {storm.messaging.netty.min_wait_ms=100, topology.backpressure.wait.strategy=org.apache.storm.policy.WaitStrategyProgressive, storm.resource.isolation.plugin=org.apache.storm.container.cgroup.CgroupManager, storm.zookeeper.auth.user=null, storm.messaging.netty.buffer_size=5242880, storm.exhibitor.port=8080, topology.bolt.wait.progressive.level1.count=1, pacemaker.auth.method=NONE, ui.filter=null, worker.profiler.enabled=false, executor.metrics.frequency.secs=60, supervisor.thrift.threads=16, ui.http.creds.plugin=org.apache.storm.security.auth.DefaultHttpCredentialsPlugin, supervisor.supervisors.commands=[], supervisor.queue.size=128, logviewer.cleanup.age.mins=10080, topology.tuple.serializer=org.apache.storm.serialization.types.ListDelegateSerializer, storm.cgroup.memory.enforcement.enable=false, drpc.port=3772, topology.max.spout.pending=null, topology.transfer.buffer.size=1000, nimbus.worker.heartbeats.recovery.strategy.class=org.apache.storm.nimbus.TimeOutWorkerHeartbeatsRecoveryStrategy, 
worker.metrics={CGroupMemory=org.apache.storm.metric.cgroup.CGroupMemoryUsage, CGroupMemoryLimit=org.apache.storm.metric.cgroup.CGroupMemoryLimit, CGroupCpu=org.apache.storm.metric.cgroup.CGroupCpu, CGroupCpuGuarantee=org.apache.storm.metric.cgroup.CGroupCpuGuarantee}, logviewer.port=8000, worker.childopts=-Xmx%HEAP-MEM%m -XX:+PrintGCDetails -Xloggc:artifacts/gc.log -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=1M -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=artifacts/heapdump, topology.component.cpu.pcore.percent=10.0, storm.daemon.metrics.reporter.plugins=[org.apache.storm.daemon.metrics.reporters.JmxPreparableReporter], blacklist.scheduler.resume.time.secs=1800, drpc.childopts=-Xmx768m, nimbus.task.launch.secs=120, logviewer.childopts=-Xmx128m, storm.supervisor.hard.memory.limit.overage.mb=2024, storm.zookeeper.servers=[127.0.0.1], storm.messaging.transport=org.apache.storm.messaging.netty.Context, storm.messaging.netty.authentication=false, topology.localityaware.higher.bound=0.8, storm.cgroup.memory.limit.tolerance.margin.mb=0.0, storm.cgroup.hierarchy.name=storm, storm.metricprocessor.class=org.apache.storm.metricstore.NimbusMetricProcessor, topology.kryo.factory=org.apache.storm.serialization.DefaultKryoFactory, nimbus.assignments.service.threads=10, worker.heap.memory.mb=768, storm.network.topography.plugin=org.apache.storm.networktopography.DefaultRackDNSToSwitchMapping, supervisor.slots.ports=[6700, 6701, 6702, 6703], topology.stats.sample.rate=0.05, storm.local.dir=/Users/anshita.singh/storm/datadir/storm, topology.backpressure.wait.park.microsec=100, topology.ras.constraint.max.state.search=10000, topology.testing.always.try.serialize=false, nimbus.assignments.service.thread.queue.size=100, storm.principal.tolocal=org.apache.storm.security.auth.DefaultPrincipalToLocal, java.library.path=/usr/local/lib:/opt/local/lib:/usr/lib:/usr/lib64, nimbus.local.assignments.backend.class=org.apache.storm.assignments.InMemoryAssignmentBackend, worker.gc.childopts=, storm.group.mapping.service.cache.duration.secs=120, topology.multilang.serializer=org.apache.storm.multilang.JsonSerializer, drpc.request.timeout.secs=600, nimbus.blobstore.class=org.apache.storm.blobstore.LocalFsBlobStore, topology.state.synchronization.timeout.secs=60, topology.bolt.wait.progressive.level2.count=1000, topology.worker.shared.thread.pool.size=4, topology.executor.receive.buffer.size=32768, pacemaker.servers=[], supervisor.monitor.frequency.secs=3, storm.nimbus.retry.times=5, topology.transfer.batch.size=1, transactional.zookeeper.port=null, storm.auth.simple-white-list.users=[], topology.scheduler.strategy=org.apache.storm.scheduler.resource.strategies.scheduling.DefaultResourceAwareStrategy, storm.zookeeper.port=2181, storm.zookeeper.retry.intervalceiling.millis=30000, storm.cluster.state.store=org.apache.storm.cluster.ZKStateStorageFactory, nimbus.thrift.port=6627, blacklist.scheduler.tolerance.count=3, nimbus.thrift.threads=64, supervisor.supervisors=[], nimbus.seeds=[localhost], supervisor.slot.ports=-6700 -6701 -6702 -6703, storm.cluster.metrics.consumer.publish.interval.secs=60, logviewer.filter.params=null, topology.min.replication.count=1, nimbus.blobstore.expiration.secs=600, storm.group.mapping.service=org.apache.storm.security.auth.ShellBasedGroupsMapping, storm.nimbus.retry.interval.millis=2000, topology.max.task.parallelism=null, topology.backpressure.wait.progressive.level2.count=1000, drpc.https.keystore.password=*****, 
resource.aware.scheduler.constraint.max.state.search=100000, supervisor.heartbeat.frequency.secs=5, nimbus.credential.renewers.freq.secs=600, storm.supervisor.medium.memory.grace.period.ms=30000, storm.thrift.transport=org.apache.storm.security.auth.SimpleTransportPlugin, storm.cgroup.hierarchy.dir=/cgroup/storm_resources, storm.zookeeper.auth.password=null, ui.port=8081, drpc.authorizer.acl.strict=false, topology.message.timeout.secs=30, topology.error.throttle.interval.secs=10, topology.backpressure.check.millis=50, drpc.https.keystore.type=JKS, supervisor.memory.capacity.mb=4096.0, storm.metricstore.class=org.apache.storm.metricstore.rocksdb.RocksDbStore, drpc.authorizer.acl.filename=drpc-auth-acl.yaml, topology.builtin.metrics.bucket.size.secs=60, topology.spout.wait.park.microsec=100, storm.local.mode.zmq=false, pacemaker.client.max.threads=2, ui.header.buffer.bytes=4096, topology.shellbolt.max.pending=100, topology.serialized.message.size.metrics=false, drpc.max_buffer_size=1048576, drpc.disable.http.binding=true, storm.codedistributor.class=org.apache.storm.codedistributor.LocalFileSystemCodeDistributor, worker.profiler.childopts=-XX:+UnlockCommercialFeatures -XX:+FlightRecorder, nimbus.supervisor.timeout.secs=60, storm.supervisor.cgroup.rootdir=storm, topology.worker.max.heap.size.mb=768.0, storm.zookeeper.root=/storm, topology.disable.loadaware.messaging=false, storm.supervisor.hard.memory.limit.multiplier=2.0, nimbus.topology.validator=org.apache.storm.nimbus.DefaultTopologyValidator, worker.heartbeat.frequency.secs=1, storm.messaging.netty.max_wait_ms=1000, topology.backpressure.wait.progressive.level1.count=1, topology.max.error.report.per.interval=5, nimbus.thrift.max_buffer_size=1048576, storm.metricstore.rocksdb.location=storm_rocks, storm.supervisor.low.memory.threshold.mb=1024, pacemaker.max.threads=50, ui.pagination=20, ui.disable.http.binding=true, supervisor.blobstore.download.max_retries=3, topology.enable.message.timeouts=true, logviewer.disable.http.binding=true, storm.messaging.netty.transfer.batch.size=262144, topology.spout.wait.progressive.level2.count=0, blacklist.scheduler.strategy=org.apache.storm.scheduler.blacklist.strategies.DefaultBlacklistStrategy, storm.metricstore.rocksdb.retention_hours=240, supervisor.run.worker.as.user=false, storm.messaging.netty.client_worker_threads=1, topology.tasks=null, supervisor.thrift.socket.timeout.ms=5000, storm.group.mapping.service.params=null, drpc.http.port=3774, transactional.zookeeper.root=/transactional, supervisor.blobstore.download.thread.count=5, logviewer.filter=null, pacemaker.kerberos.users=[], topology.spout.wait.strategy=org.apache.storm.policy.WaitStrategyProgressive, storm.blobstore.inputstream.buffer.size.bytes=65536, supervisor.worker.heartbeats.max.timeout.secs=600, supervisor.worker.timeout.secs=30, topology.worker.receiver.thread.count=1, logviewer.max.sum.worker.logs.size.mb=4096, topology.executor.overflow.limit=0, topology.batch.flush.interval.millis=1, nimbus.file.copy.expiration.secs=600, pacemaker.port=6699, topology.worker.logwriter.childopts=-Xmx64m, drpc.http.creds.plugin=org.apache.storm.security.auth.DefaultHttpCredentialsPlugin, nimbus.topology.blobstore.deletion.delay.ms=300000, storm.blobstore.acl.validation.enabled=false, ui.filter.params=null, topology.workers=1, blacklist.scheduler.tolerance.time.secs=300, storm.supervisor.medium.memory.threshold.mb=1536, topology.environment=null, drpc.invocations.port=3773, storm.metricstore.rocksdb.create_if_missing=true, 
nimbus.cleanup.inbox.freq.secs=600, client.blobstore.class=org.apache.storm.blobstore.NimbusBlobStore, topology.fall.back.on.java.serialization=true, storm.nimbus.retry.intervalceiling.millis=60000, storm.nimbus.zookeeper.acls.fixup=true, logviewer.appender.name=A1, ui.users=null, pacemaker.childopts=-Xmx1024m, storm.messaging.netty.server_worker_threads=1, scheduler.display.resource=false, ui.actions.enabled=true, storm.thrift.socket.timeout.ms=600000, storm.topology.classpath.beginning.enabled=false, storm.zookeeper.connection.timeout=15000, topology.tick.tuple.freq.secs=null, nimbus.inbox.jar.expiration.secs=3600, topology.debug=false, storm.zookeeper.retry.interval=1000, storm.messaging.netty.buffer.high.watermark=16777216, storm.blobstore.dependency.jar.upload.chunk.size.bytes=1048576, worker.log.level.reset.poll.secs=30, storm.exhibitor.poll.uripath=/exhibitor/v1/cluster/list, storm.zookeeper.retry.times=5, nimbus.code.sync.freq.secs=120, topology.component.resources.offheap.memory.mb=0.0, topology.spout.wait.progressive.level1.count=0, topology.state.checkpoint.interval.ms=1000, topology.priority=29, supervisor.localizer.cleanup.interval.ms=30000, nimbus.host=127.0.0.1, storm.health.check.dir=healthchecks, supervisor.cpu.capacity=400.0, topology.backpressure.wait.progressive.level3.sleep.millis=1, storm.cgroup.resources=[cpu, memory], storm.worker.min.cpu.pcore.percent=0.0, topology.classpath=null, storm.nimbus.zookeeper.acls.check=true, num.stat.buckets=20, topology.spout.wait.progressive.level3.sleep.millis=1, supervisor.localizer.cache.target.size.mb=10240, topology.worker.childopts=null, drpc.https.port=-1, topology.bolt.wait.park.microsec=100, topology.max.replication.wait.time.sec=60, storm.cgroup.cgexec.cmd=/bin/cgexec, topology.acker.executors=null, topology.bolt.wait.progressive.level3.sleep.millis=1, supervisor.worker.start.timeout.secs=120, supervisor.worker.shutdown.sleep.secs=3, logviewer.max.per.worker.logs.size.mb=2048, topology.trident.batch.emit.interval.millis=500, task.heartbeat.frequency.secs=3, supervisor.enable=true, supervisor.thrift.max_buffer_size=1048576, supervisor.blobstore.class=org.apache.storm.blobstore.NimbusBlobStore, topology.producer.batch.size=1, drpc.worker.threads=64, resource.aware.scheduler.priority.strategy=org.apache.storm.scheduler.resource.strategies.priority.DefaultSchedulingPriorityStrategy, blacklist.scheduler.reporter=org.apache.storm.scheduler.blacklist.reporters.LogReporter, storm.messaging.netty.socket.backlog=500, storm.cgroup.inherit.cpuset.configs=false, nimbus.queue.size=100000, drpc.queue.size=128, ui.disable.spout.lag.monitoring=true, topology.eventlogger.executors=0, pacemaker.base.threads=10, nimbus.childopts=-Xmx1024m, topology.spout.recvq.skips=3, storm.resource.isolation.plugin.enable=false, nimbus.monitor.freq.secs=10, storm.supervisor.memory.limit.tolerance.margin.mb=128.0, storm.disable.symlinks=false, topology.localityaware.lower.bound=0.2, transactional.zookeeper.servers=null, nimbus.task.timeout.secs=30, logs.users=null, pacemaker.thrift.message.size.max=10485760, ui.host=0.0.0.0, supervisor.thrift.port=6628, topology.bolt.wait.strategy=org.apache.storm.policy.WaitStrategyProgressive, pacemaker.thread.timeout=10, storm.meta.serialization.delegate=org.apache.storm.serialization.GzipThriftSerializationDelegate, dev.zookeeper.path=/tmp/dev-storm-zookeeper, topology.skip.missing.kryo.registrations=false, drpc.invocations.threads=64, storm.zookeeper.session.timeout=20000, 
storm.metricstore.rocksdb.metadata_string_cache_capacity=4000, storm.workers.artifacts.dir=workers-artifacts, topology.component.resources.onheap.memory.mb=128.0, storm.log4j2.conf.dir=log4j2, storm.cluster.mode=distributed, ui.childopts=-Xmx768m, task.refresh.poll.secs=10, supervisor.childopts=-Xmx256m, task.credentials.poll.secs=30, storm.health.check.timeout.ms=5000, storm.blobstore.replication.factor=3, worker.profiler.command=flight.bash, storm.messaging.netty.buffer.low.watermark=8388608} 2020-05-25 14:51:41.877 o.a.s.z.LeaderElectorImp main [INFO] Queued up for leader lock. 2020-05-25 14:51:41.907 o.a.s.n.NimbusInfo main-EventThread [INFO] Nimbus figures out its name to 7480-GQY29H2.smarshcorp.com 2020-05-25 14:51:41.929 o.a.s.n.LeaderListenerCallback main-EventThread [INFO] Sync remote assignments and id-info to local 2020-05-25 14:51:41.963 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/word-topology-1-1589738489-stormcode.ser/7480-GQY29H2.smarshcorp.com:6627-1 2020-05-25 14:51:41.999 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/Stock-Topology-1-1589133962-stormcode.ser/7480-GQY29H2.smarshcorp.com:6627-1 2020-05-25 14:51:42.015 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/word-topology-1-1589738489-stormconf.ser/7480-GQY29H2.smarshcorp.com:6627-1 2020-05-25 14:51:42.035 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/Stock-Topology-1-1589133962-stormjar.jar/7480-GQY29H2.smarshcorp.com:6627-1 2020-05-25 14:51:42.052 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/Stock-Topology-1-1589133962-stormconf.ser/7480-GQY29H2.smarshcorp.com:6627-1 2020-05-25 14:51:42.081 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/word-topology-1-1589738489-stormjar.jar/7480-GQY29H2.smarshcorp.com:6627-1 2020-05-25 14:51:42.104 o.a.s.n.LeaderListenerCallback main-EventThread [INFO] active-topology-blobs [Stock-Topology-1-1589133962,word-topology-1-1589738489] local-topology-blobs [word-topology-1-1589738489-stormcode.ser,Stock-Topology-1-1589133962-stormcode.ser,word-topology-1-1589738489-stormconf.ser,Stock-Topology-1-1589133962-stormjar.jar,Stock-Topology-1-1589133962-stormconf.ser,word-topology-1-1589738489-stormjar.jar] diff-topology-blobs [] 2020-05-25 14:51:42.239 o.a.s.d.m.ClientMetricsUtils main [INFO] Using statistics reporter plugin:org.apache.storm.daemon.metrics.reporters.JmxPreparableReporter 2020-05-25 14:51:42.297 o.a.s.d.m.r.JmxPreparableReporter main [INFO] Preparing... 2020-05-25 14:51:42.322 o.a.s.m.StormMetricsRegistry main [INFO] Started statistics report plugin... 2020-05-25 14:51:42.327 o.a.s.d.n.Nimbus main [INFO] Starting nimbus server for storm version '2.1.0' 2020-05-25 14:51:42.408 o.a.s.n.LeaderListenerCallback main-EventThread [INFO] active-topology-dependencies [] local-blobs [word-topology-1-1589738489-stormcode.ser,Stock-Topology-1-1589133962-stormcode.ser,word-topology-1-1589738489-stormconf.ser,Stock-Topology-1-1589133962-stormjar.jar,Stock-Topology-1-1589133962-stormconf.ser,word-topology-1-1589738489-stormjar.jar] diff-topology-dependencies [] 2020-05-25 14:51:42.409 o.a.s.n.LeaderListenerCallback main-EventThread [INFO] Accepting leadership, all active topologies and corresponding dependencies found locally. 2020-05-25 14:51:42.409 o.a.s.z.LeaderListenerCallbackFactory main-EventThread [INFO] 7480-GQY29H2.smarshcorp.com gained leadership. 
2020-05-25 14:51:42.603 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[4, 4] not alive 2020-05-25 14:51:42.603 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[1, 1] not alive 2020-05-25 14:51:42.603 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[2, 2] not alive 2020-05-25 14:51:42.604 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[3, 3] not alive 2020-05-25 14:51:42.604 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[1, 1] not alive 2020-05-25 14:51:42.604 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[3, 3] not alive 2020-05-25 14:51:42.604 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[2, 2] not alive 2020-05-25 14:51:42.618 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology 2020-05-25 14:51:42.618 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology 2020-05-25 14:51:42.618 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology 2020-05-25 14:51:42.619 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology 2020-05-25 14:51:45.096 o.a.s.d.n.Nimbus timer [INFO] TRANSITION: word-topology-1-1589738489 GAIN_LEADERSHIP null false 2020-05-25 14:51:45.098 o.a.s.d.n.Nimbus timer [INFO] TRANSITION: Stock-Topology-1-1589133962 GAIN_LEADERSHIP null false 2020-05-25 14:51:52.682 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[4, 4] not alive 2020-05-25 14:51:52.682 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[1, 1] not alive 2020-05-25 14:51:52.683 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[2, 2] not alive 2020-05-25 14:51:52.683 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[3, 3] not alive 2020-05-25 14:51:52.683 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[1, 1] not alive 2020-05-25 14:51:52.684 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[3, 3] not alive 2020-05-25 14:51:52.684 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[2, 2] not alive 2020-05-25 14:51:52.685 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology 2020-05-25 14:51:52.686 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology 2020-05-25 14:51:52.686 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology 2020-05-25 14:51:52.687 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology 2020-05-25 14:52:02.734 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[4, 4] not alive 2020-05-25 14:52:02.734 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[1, 1] not alive 2020-05-25 14:52:02.734 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[2, 2] not alive 2020-05-25 14:52:02.735 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[3, 3] not alive 2020-05-25 14:52:02.735 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[1, 1] not alive 2020-05-25 14:52:02.735 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[3, 3] not alive 2020-05-25 14:52:02.735 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[2, 2] not alive 2020-05-25 14:52:02.736 
o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology 2020-05-25 14:52:02.737 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology 2020-05-25 14:52:02.737 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology 2020-05-25 14:52:02.737 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology 2020-05-25 14:52:12.773 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[4, 4] not alive 2020-05-25 14:52:12.773 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[1, 1] not alive 2020-05-25 14:52:12.773 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[2, 2] not alive 2020-05-25 14:52:12.773 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[3, 3] not alive 2020-05-25 14:52:12.774 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[1, 1] not alive 2020-05-25 14:52:12.774 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[3, 3] not alive 2020-05-25 14:52:12.774 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[2, 2] not alive 2020-05-25 14:52:12.774 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology 2020-05-25 14:52:12.774 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology 2020-05-25 14:52:12.775 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology 2020-05-25 14:52:12.775 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology 2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[4, 4] not alive 2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[1, 1] not alive 2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[2, 2] not alive 2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[3, 3] not alive 2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[1, 1] not alive 2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[3, 3] not alive 2020-05-25 14:52:22.810 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[2, 2] not alive 2020-05-25 14:52:22.811 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology 2020-05-25 14:52:22.811 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology Here is my storm.yml: storm.zookeeper.servers: - "127.0.0.1" nimbus.host: "127.0.0.1" ui.port: 8081 storm.local.dir: "/Users/anshita.singh/storm/datadir/storm" supervisor.slot.ports: -6700 -6701 -6702 -6703 # storm.zookeeper.servers: # - "server1" # - "server2" # # nimbus.seeds: ["host1", "host2", "host3"] # # # ##### These may optionally be filled in: # ## List of custom serializations # topology.kryo.register: # - org.mycompany.MyType # - org.mycompany.MyType2: org.mycompany.MyType2Serializer # ## List of custom kryo decorators # topology.kryo.decorators: # - org.mycompany.MyDecorator # ## Locations of the drpc servers # drpc.servers: # - "server1" # - "server2" ## Metrics Consumers ## max.retain.metric.tuples ## - task queue will be unbounded when max.retain.metric.tuples is equal or less than 0. 
## whitelist / blacklist ## - when none of configuration for metric filter are specified, it'll be treated as 'pass all'. ## - you need to specify either whitelist or blacklist, or none of them. You can't specify both of them. ## - you can specify multiple whitelist / blacklist with regular expression ## expandMapType: expand metric with map type as value to multiple metrics ## - set to true when you would like to apply filter to expanded metrics ## - default value is false which is backward compatible value ## metricNameSeparator: separator between origin metric name and key of entry from map ## - only effective when expandMapType is set to true ## - default value is "." # topology.metrics.consumer.register: # - class: "org.apache.storm.metric.LoggingMetricsConsumer" # max.retain.metric.tuples: 100 # parallelism.hint: 1 # - class: "org.mycompany.MyMetricsConsumer" # max.retain.metric.tuples: 100 # whitelist: # - "execute.*" # - "^__complete-latency$" # parallelism.hint: 1 # argument: # - endpoint: "metrics-collector.mycompany.org" # expandMapType: true # metricNameSeparator: "." ## Cluster Metrics Consumers # storm.cluster.metrics.consumer.register: # - class: "org.apache.storm.metric.LoggingClusterMetricsConsumer" # - class: "org.mycompany.MyMetricsConsumer" # argument: # - endpoint: "metrics-collector.mycompany.org" # # storm.cluster.metrics.consumer.publish.interval.secs: 60 # Event Logger # topology.event.logger.register: # - class: "org.apache.storm.metric.FileBasedEventLogger" # - class: "org.mycompany.MyEventLogger" # arguments: # endpoint: "event-logger.mycompany.org" # Metrics v2 configuration (optional) #storm.metrics.reporters: # # Graphite Reporter # - class: "org.apache.storm.metrics2.reporters.GraphiteStormReporter" # daemons: # - "supervisor" # - "nimbus" # - "worker" # report.period: 60 # report.period.units: "SECONDS" # graphite.host: "localhost" # graphite.port: 2003 # # # Console Reporter # - class: "org.apache.storm.metrics2.reporters.ConsoleStormReporter" # daemons: # - "worker" # report.period: 10 # report.period.units: "SECONDS" # filter: # class: "org.apache.storm.metrics2.filters.RegexFilter" # expression: ".*my_component.*emitted.*" Can anyone tell me what configuration I have missed, if any? And please let me know if any else information is needed to debug this? My environment: Apache-storm-2.1.0 Apache-zookeeper-3.6.1 Solution: Run below command: storm admin remove_corrupt_topologies
It looks like there were some corrupted topologies. Running this command fixed the issue: storm admin remove_corrupt_topologies
Storm 1.2.2 and Kafka Version 2.x
I'm testing a case using Storm 1.2.2 and Kafka 2.x as my spout, so I created a LocalCluster just for test purposes.

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("kafka_spout", new KafkaSpout<>(KafkaSpoutConfig.builder("MYKAFKAIP:9092", "storm-test-dpi").build()), 1);
builder.setBolt("bolt", new LoggerBolt()).shuffleGrouping("kafka_spout");

LocalCluster localCluster = new LocalCluster();
localCluster.submitTopology("kafkaBoltTest", new Config(), builder.createTopology());
Utils.sleep(10000);

After initializing this app I got the following:

9293 [Thread-20-kafka_spout-executor[3 3]] INFO o.a.k.c.u.AppInfoParser - Kafka version : 0.10.1.0
9293 [Thread-20-kafka_spout-executor[3 3]] INFO o.a.k.c.u.AppInfoParser - Kafka commitId : 3402a74efb23d1d4

and after that a lot of errors:

9664 [Thread-20-kafka_spout-executor[3 3]] INFO o.a.s.k.s.KafkaSpout - Initialization complete
9703 [Thread-20-kafka_spout-executor[3 3]] WARN o.a.k.c.c.i.Fetcher - Unknown error fetching data for topic-partition storm-test-dpi-0
9714 [Thread-20-kafka_spout-executor[3 3]] WARN o.a.k.c.c.i.Fetcher - Unknown error fetching data for topic-partition storm-test-dpi-0
9742 [Thread-20-kafka_spout-executor[3 3]] WARN o.a.k.c.c.i.Fetcher - Unknown error fetching data for topic-partition storm-test-dpi-0
9756 [Thread-20-kafka_spout-executor[3 3]] WARN o.a.k.c.c.i.Fetcher - Unknown error fetching data for topic-partition storm-test-dpi-0
9767 [Thread-20-kafka_spout-executor[3 3]] WARN o.a.k.c.c.i.Fetcher - Unknown error fetching data for topic-partition storm-test-dpi-0
9781 [Thread-20-kafka_spout-executor[3 3]] WARN o.a.k.c.c.i.Fetcher - Unknown error fetching data for topic-partition storm-test-dpi-0
9806 [Thread-20-kafka_spout-executor[3 3]] WARN o.a.k.c.c.i.Fetcher - Unknown error fetching data for topic-partition storm-test-dpi-0

I think this problem is caused by the Kafka version: as you can see, the log shows version "0.10.1.0", but my Kafka version is "2.x". This is my pom.xml:

<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>${version.storm}</version>
</dependency>
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka-client</artifactId>
    <version>${version.storm}</version>
</dependency>

where ${version.storm} is 1.2.2.
You are also supposed to declare the version of kafka-clients you are using. The storm-kafka-client POM sets the kafka-clients scope to provided, which means kafka-clients won't be included when you build. We do this so you can easily upgrade. The reason it even runs for you is that you are using LocalCluster in some test code, where provided dependencies are present. Add this to your POM, and it should work:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>your-kafka-version-here</version>
</dependency>
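For reference, "provided" scope in the storm-kafka-client POM looks roughly like this (a paraphrase of what is described above, not a verbatim copy of that POM):

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <!-- provided: available at compile time, but not packaged into your topology jar -->
    <scope>provided</scope>
</dependency>

That is why a deployed topology needs its own kafka-clients dependency, pinned to the Kafka version you actually run.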
java.lang.NullPointerException: at utilities2.ExcelUtility.findCells(ExcelUtility.java:70)
I am trying to run my test cases from command Line using maven command. Below is the error message I am getting. I was able to run the test case when I right click on my XML file and run it with TestNG suites without a problem. The problem I have is that I can't execute the test cases that are inside my pom.xml. This problem only occurs whenever I am using excel for data-driven.I am suspecting my poi might not be compatible with the maven version or the chrome gecko might not be compatible with the maven version. I am just guessing and that is why I need your help. C:\ Blockquote mvn test -PClientAlert [INFO] Scanning for projects... [INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building cehproject 0.0.1-SNAPSHOT [INFO] ------------------------------------------------------------------------ [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) # cehproject --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory C:\Users\akinrins\workspace\cehprojec t\src\main\resources [INFO] [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) # cehproject --- [INFO] Nothing to compile - all classes are up to date [INFO] [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) # ce hproject --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory C:\Users\akinrins\workspace\cehprojec t\src\test\resources [INFO] [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) # cehproj ect --- [INFO] Nothing to compile - all classes are up to date [INFO] [INFO] --- maven-surefire-plugin:2.18.1:test (default-test) # cehproject --- [INFO] Surefire report directory: C:\Users\akinrins\workspace\cehproject\target\ surefire-reports ------------------------------------------------------- T E S T S ------------------------------------------------------- Running TestSuite Starting ChromeDriver 2.30.477700 (0057494ad8732195794a7b32078424f92a5fce41) on port 8611 Only local connections are allowed. Jan 16, 2018 2:37:04 PM org.openqa.selenium.remote.ProtocolHandshake createSessi on INFO: Attempting bi-dialect session, assuming Postel's Law holds true on the rem ote end log4j:WARN No appenders could be found for logger (org.apache.http.client.protoc ol.RequestAddCookies). log4j:WARN Please initialize the log4j system properly. log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more in fo. Jan 16, 2018 2:37:06 PM org.openqa.selenium.remote.ProtocolHandshake createSessi on INFO: Detected dialect: OSS java.lang.NullPointerException at utilities2.ExcelUtility.findCells(ExcelUtility.java:70) at utilities2.ExcelUtility.getTestData(ExcelUtility.java:40) at AlertTesting.ClientAlertTest.dataProvider(ClientAlertTest.java:68) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl. 
java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces sorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocat ionHelper.java:108) at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocat ionHelper.java:55) at org.testng.internal.MethodInvocationHelper.invokeMethodNoCheckedExcep tion(MethodInvocationHelper.java:45) at org.testng.internal.MethodInvocationHelper.invokeDataProvider(MethodI nvocationHelper.java:115) at org.testng.internal.Parameters.handleParameters(Parameters.java:509) at org.testng.internal.Invoker.handleParameters(Invoker.java:1308) at org.testng.internal.Invoker.createParameters(Invoker.java:1036) at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1126) at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWork er.java:126) at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:109) at org.testng.TestRunner.privateRun(TestRunner.java:744) at org.testng.TestRunner.run(TestRunner.java:602) at org.testng.SuiteRunner.runTest(SuiteRunner.java:380) at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:375) at org.testng.SuiteRunner.privateRun(SuiteRunner.java:340) at org.testng.SuiteRunner.run(SuiteRunner.java:289) at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52) at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86) at org.testng.TestNG.runSuitesSequentially(TestNG.java:1301) at org.testng.TestNG.runSuitesLocally(TestNG.java:1226) at org.testng.TestNG.runSuites(TestNG.java:1144) at org.testng.TestNG.run(TestNG.java:1115) at org.apache.maven.surefire.testng.TestNGExecutor.run(TestNGExecutor.ja va:295) at org.apache.maven.surefire.testng.TestNGXmlTestSuite.execute(TestNGXml TestSuite.java:84) at org.apache.maven.surefire.testng.TestNGProvider.invoke(TestNGProvider .java:90) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameCla ssLoader(ForkedBooter.java:203) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(Fork edBooter.java:155) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java: 103) [Utils] [ERROR] [Error] org.testng.TestNGException: Data Provider public java.lang.Object[][] AlertTesting.ClientAlertTest.dataProvi der() must return either Object[][] or Iterator<Object>[], not class [[Lja va.lang.Object; at org.testng.internal.MethodInvocationHelper.invokeDataProvider(MethodI nvocationHelper.java:137) at org.testng.internal.Parameters.handleParameters(Parameters.java:509) at org.testng.internal.Invoker.handleParameters(Invoker.java:1308) at org.testng.internal.Invoker.createParameters(Invoker.java:1036) at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1126) at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWork er.java:126) at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:109) at org.testng.TestRunner.privateRun(TestRunner.java:744) at org.testng.TestRunner.run(TestRunner.java:602) at org.testng.SuiteRunner.runTest(SuiteRunner.java:380) at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:375) at org.testng.SuiteRunner.privateRun(SuiteRunner.java:340) at org.testng.SuiteRunner.run(SuiteRunner.java:289) at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52) at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86) at org.testng.TestNG.runSuitesSequentially(TestNG.java:1301) at org.testng.TestNG.runSuitesLocally(TestNG.java:1226) at 
org.testng.TestNG.runSuites(TestNG.java:1144) at org.testng.TestNG.run(TestNG.java:1115) at org.apache.maven.surefire.testng.TestNGExecutor.run(TestNGExecutor.ja va:295) at org.apache.maven.surefire.testng.TestNGXmlTestSuite.execute(TestNGXml TestSuite.java:84) at org.apache.maven.surefire.testng.TestNGProvider.invoke(TestNGProvider .java:90) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameCla ssLoader(ForkedBooter.java:203) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(Fork edBooter.java:155) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java: 103) Tests run: 6, Failures: 2, Errors: 0, Skipped: 4, Time elapsed: 9.846 sec <<< FA ILURE! - in TestSuite ClientAlertOne(AlertTesting.ClientAlertTest) Time elapsed: 9.712 sec <<< FAILU RE! java.lang.NoSuchMethodError: org.apache.poi.util.POILogger.log(ILjava/lang/Objec t;)V at org.apache.poi.openxml4j.opc.PackageRelationshipCollection.parseRelat ionshipsPart(PackageRelationshipCollection.java:304) at org.apache.poi.openxml4j.opc.PackageRelationshipCollection.<init>(Pac kageRelationshipCollection.java:156) at org.apache.poi.openxml4j.opc.PackageRelationshipCollection.<init>(Pac kageRelationshipCollection.java:124) at org.apache.poi.openxml4j.opc.PackagePart.loadRelationships(PackagePar t.java:559) at org.apache.poi.openxml4j.opc.PackagePart.<init>(PackagePart.java:112) at org.apache.poi.openxml4j.opc.PackagePart.<init>(PackagePart.java:83) at org.apache.poi.openxml4j.opc.PackagePart.<init>(PackagePart.java:128) at org.apache.poi.openxml4j.opc.ZipPackagePart.<init>(ZipPackagePart.jav a:78) at org.apache.poi.openxml4j.opc.ZipPackage.getPartsImpl(ZipPackage.java: 218) at org.apache.poi.openxml4j.opc.OPCPackage.getParts(OPCPackage.java:662) at org.apache.poi.openxml4j.opc.OPCPackage.open(OPCPackage.java:269) at org.apache.poi.util.PackageHelper.open(PackageHelper.java:39) at org.apache.poi.xssf.usermodel.XSSFWorkbook.<init>(XSSFWorkbook.java:2 04) at utilities2.ExcelUtility.setExcelFile(ExcelUtility.java:27) at AlertTesting.ClientAlertTest.ClientAlertOne(ClientAlertTest.java:63) TC_ClientAlertTest(AlertTesting.ClientAlertTest) Time elapsed: 9.743 sec <<< F AILURE! 
org.testng.TestNGException: Data Provider public java.lang.Object[][] AlertTesting.ClientAlertTest.dataProvi der() must return either Object[][] or Iterator<Object>[], not class [[Ljava.lan g.Object; at org.testng.internal.MethodInvocationHelper.invokeDataProvider(MethodI nvocationHelper.java:137) at org.testng.internal.Parameters.handleParameters(Parameters.java:509) at org.testng.internal.Invoker.handleParameters(Invoker.java:1308) at org.testng.internal.Invoker.createParameters(Invoker.java:1036) at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1126) at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWork er.java:126) at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:109) at org.testng.TestRunner.privateRun(TestRunner.java:744) at org.testng.TestRunner.run(TestRunner.java:602) at org.testng.SuiteRunner.runTest(SuiteRunner.java:380) at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:375) at org.testng.SuiteRunner.privateRun(SuiteRunner.java:340) at org.testng.SuiteRunner.run(SuiteRunner.java:289) at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52) at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86) at org.testng.TestNG.runSuitesSequentially(TestNG.java:1301) at org.testng.TestNG.runSuitesLocally(TestNG.java:1226) at org.testng.TestNG.runSuites(TestNG.java:1144) at org.testng.TestNG.run(TestNG.java:1115) at org.apache.maven.surefire.testng.TestNGExecutor.run(TestNGExecutor.ja va:295) at org.apache.maven.surefire.testng.TestNGXmlTestSuite.execute(TestNGXml TestSuite.java:84) at org.apache.maven.surefire.testng.TestNGProvider.invoke(TestNGProvider .java:90) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameCla ssLoader(ForkedBooter.java:203) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(Fork edBooter.java:155) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java: 103) Results : Failed tests: ClientAlertTest.ClientAlertOne:63 » NoSuchMethod org.apache.poi.util.POILogger ... ClientAlertTest.TC_ClientAlertTest » TestNG Data Provider public java.lang.Ob... Tests run: 6, Failures: 2, Errors: 0, Skipped: 4 [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 13.268 s [INFO] Finished at: 2018-01-16T14:37:13-05:00 [INFO] Final Memory: 13M/309M [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2. 18.1:test (default-test) on project cehproject: There are test failures. [ERROR] [ERROR] Please refer to C:\Users\akinrins\workspace\cehproject\target\surefire-r eports for the individual test results. [ERROR] -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e swit ch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. `enter code here`ERROR] [ERROR] For more information about the errors and possible solutions, please rea d the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureExc eption
spray-client throwing "Too many open files" exception when giving more concurrent requests
I have a spray HTTP client running on a server X, which makes connections to server Y. Server Y is kind of slow (3+ seconds per request). This is my HTTP client invocation:

def get() {
  val result = for {
    response <- IO(Http).ask(HttpRequest(GET, Uri(getUri(msg)), headers)).mapTo[HttpResponse]
  } yield response
  result onComplete {
    case Success(res) => sendSuccess(res)
    case Failure(error) => sendError(res)
  }
}

These are the configurations I have in application.conf:

spray.can {
  client {
    request-timeout = 30s
    response-chunk-aggregation-limit = 0
    max-connections = 50
    warn-on-illegal-headers = off
  }
  host-connector {
    max-connections = 128
    idle-timeout = 3s
  }
}

Now I tried to abuse server X with a large number of concurrent requests (using ab with n=1000 and c=100). Up to about 900 requests it went fine; after that the server threw a lot of exceptions, and I couldn't hit the server any more. These are the exceptions:

[info] [ERROR] [03/28/2015 17:33:13.276] [squbs-akka.actor.default-dispatcher-6] [akka://squbs/system/IO-TCP/selectors/$a/0] Accept error: could not accept new connection
[info] java.io.IOException: Too many open files
[info] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
[info] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
[info] at akka.io.TcpListener.acceptAllPending(TcpListener.scala:103)

On further hitting the same server, it threw the exception below:

[info] [ERROR] [03/28/2015 17:53:16.735] [hcp-client-akka.actor.default-dispatcher-6] [akka://hcp-client/system/IO-TCP/selectors] null
[info] akka.actor.ActorInitializationException: exception during creation
[info] at akka.actor.ActorInitializationException$.apply(Actor.scala:164)
[info] at akka.actor.ActorCell.create(ActorCell.scala:596)
[info] Caused by: java.lang.reflect.InvocationTargetException
[info] at sun.reflect.GeneratedConstructorAccessor59.newInstance(Unknown Source)
[info] Caused by: java.io.IOException: Too many open files
[info] at sun.nio.ch.IOUtil.makePipe(Native Method)

I was previously using the Apache HTTP client (which is synchronous), and it was able to handle 10000+ requests with a concurrency of 100. I'm not sure what I'm missing. Any help would be appreciated.
The problem is that every time you call the get() method it creates a new actor, and each actor creates at least one connection to the remote server. Furthermore, you never shut those actors down, so each such connection lives until it times out. You only need a single such actor to manage all your HTTP requests, so to fix it, take IO(Http) out of the get() method and call it only once. Reuse the returned ActorRef for all your requests to that server, and shut it down on application shutdown. For example:

val system: ActorSystem = ...
val io = IO(Http)(system)
io ! Http.Bind( ...

def get(): Unit = {
  ...
  io.ask ...  // or io.tell ...
}
sql-maven-plugin throws SQL syntax error for H2 stored procedure definition
I am trying to install a stored procedure in an H2 database (v. 1.3.170) using sql-maven-plugin version 1.5. The offending SQL statement looks like this:

CREATE ALIAS GET_DATA AS $$
ResultSet getData(Connection conn, String id) {
    return null;
}
$$;

This has been adapted from User-Defined Functions and Stored Procedures from the H2 website. The Maven error I get is this:

[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building dummy 0.5.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ X4PAAsql ---
[INFO] Deleting D:\Data\Git\x4paa\SQL\target
[INFO]
[INFO] --- sql-maven-plugin:1.5:execute (createTables) @ X4PAAsql ---
[INFO] Executing file: C:\Users\THOMAS~1\AppData\Local\Temp\prepareDB.331774647sql
[INFO] Executing file: C:\Users\THOMAS~1\AppData\Local\Temp\storedProc.2069356353sql
[ERROR] Failed to execute: CREATE ALIAS GET_DATA AS $$ #CODE static ResultSet getData(Connection conn, String id) { return null
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.717s
[INFO] Finished at: Wed Aug 07 11:16:40 CEST 2013
[INFO] Final Memory: 4M/122M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:sql-maven-plugin:1.5:execute (createTables) on project X4PAAsql: Syntax Fehler in SQL Befehl " CREATE ALIAS GET_DATA AS [*]$$
[ERROR] #CODE
[ERROR] static ResultSet getData(Connection conn, String id) {
[ERROR] return null"
[ERROR] Syntax error in SQL statement " CREATE ALIAS GET_DATA AS [*]$$
[ERROR] #CODE
[ERROR] static ResultSet getData(Connection conn, String id) {
[ERROR] return null" [42000-170]
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

I use the following POM contents to configure the sql-maven-plugin:

<properties>
    <h2db.version>1.3.170</h2db.version>
    <sql-maven-plugin.version>1.5</sql-maven-plugin.version>
</properties>
<build>
    <plugins>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>sql-maven-plugin</artifactId>
            <version>${sql-maven-plugin.version}</version>
            <dependencies>
                <dependency>
                    <groupId>com.h2database</groupId>
                    <artifactId>h2</artifactId>
                    <version>${h2db.version}</version>
                </dependency>
            </dependencies>
            <configuration>
                <driver>org.h2.Driver</driver>
                <url>jdbc:h2:SQL/target/H2DB/x4</url>
                <username>sa</username>
                <password>sa</password>
            </configuration>
            <executions>
                <execution>
                    <id>createTables</id>
                    <phase>compile</phase>
                    <goals>
                        <goal>execute</goal>
                    </goals>
                    <configuration>
                        <forceMojoExecution>true</forceMojoExecution>
                        <srcFiles>
                            <srcFile>scripts/h2/prepareDB.sql</srcFile>
                            <srcFile>scripts/h2/storedProc.sql</srcFile>
                        </srcFiles>
                    </configuration>
                </execution>
            </executions>
        </plugin>

Is there a way to work around the syntactical problem with the '$$'?

Update: I tried to run the query in SQuirreL and got the same error.
So is the problem perhaps not related to the sql-maven-plugin itself, but to the way the JDBC driver is used by the plugin or by SQuirreL?
The sql-maven-plugin splits the SQL statement at the ";". The original statement was

CREATE ALIAS GET_DATA AS $$
ResultSet getData(Connection conn, String id) {
    return null;
}
$$;

however, the exception message from the database only shows the first part, up to the ";":

CREATE ALIAS GET_DATA AS [*]$$
#CODE
static ResultSet getData(Connection conn, String id) {
return null

(The marker [*] just before the $$ is the position where parsing fails, because the parser never sees the end token $$.) The "good" solution would be to change the sql-maven-plugin to support quoted data, but I guess that will not be easy. As a workaround, you could try changing the statement to:

CREATE ALIAS GET_DATA AS $$
ResultSet getData(Connection conn, String id) {
    return null;
}
$$;
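To illustrate the splitting behavior described above, here is a toy Java sketch (an illustration of the effect, not the plugin's actual implementation):

public class SplitDemo {
    public static void main(String[] args) {
        // Splitting a script on ";" without tracking $$-quoted blocks cuts
        // the statement inside the Java body, which is in effect what happens.
        String script = "CREATE ALIAS GET_DATA AS $$ ResultSet getData(Connection conn, String id) { return null; } $$;";
        String[] statements = script.split(";");
        // statements[0] ends right after "return null" -- the truncated
        // statement that shows up in the H2 error message.
        System.out.println(statements[0]);
    }
}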