The Storm UI throws a red warning message; how can I remove it? - apache-storm

Here are the relevant screenshots of the Storm UI.
Only the following options are configured in the storm.yaml configuration file:
storm.zookeeper.servers
nimbus.seeds
storm.zookeeper.port
storm.local.dir
storm.scheduler
scheduler.display.servers
All other options use the defaults from the defaults.yaml configuration file. Why is this warning shown in the Storm UI?
I've tried modifying the number of workers, memory size, etc., but the warning persists.
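For reference, a minimal storm.yaml with just those options looks roughly like the sketch below; the host names, port, local dir, and scheduler class are placeholders rather than the actual values from this cluster, and scheduler.display.servers is omitted because its value depends on the setup:
storm.zookeeper.servers:
  - "zk1.example.com"      # placeholder ZooKeeper hosts
  - "zk2.example.com"
storm.zookeeper.port: 2181
nimbus.seeds: ["nimbus1.example.com"]   # placeholder Nimbus host
storm.local.dir: "/var/storm"
storm.scheduler: "org.apache.storm.scheduler.resource.ResourceAwareScheduler"   # placeholder scheduler class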

Related

Beam / Dataflow unexpected error ProtocolMessageEnum not implemented when using DataflowRunner

When running my Beam pipeline locally it all works as expected, but when trying to run it on the DataflowRunner I suddenly get the error below. Honestly, I don't even know where to start evaluating this, because the DataflowRunner seems to be a black box.
Jan 14, 2019 11:26:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 165 files. Enable logging at DEBUG level to see which files will be staged.
Exception in thread "main" java.lang.IncompatibleClassChangeError: Class org.apache.beam.model.pipeline.v1.RunnerApi$StandardPTransforms$Primitives does not implement the requested interface com.google.protobuf.ProtocolMessageEnum
at org.apache.beam.runners.core.construction.BeamUrns.getUrn(BeamUrns.java:27)
at org.apache.beam.runners.core.construction.PTransformTranslation.<clinit>(PTransformTranslation.java:58)
at org.apache.beam.runners.core.construction.UnconsumedReads$1.visitValue(UnconsumedReads.java:49)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:666)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:649)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:649)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:649)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$600(TransformHierarchy.java:311)
at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:245)
at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:458)
at org.apache.beam.runners.core.construction.UnconsumedReads.ensureAllReadsConsumed(UnconsumedReads.java:40)
at org.apache.beam.runners.dataflow.DataflowRunner.replaceTransforms(DataflowRunner.java:868)
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:660)
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:173)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:313)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:299)
at (my code: pipe.run().waitUntilFinish();)
Check the versions of Beam etc. and upgrade your dependencies where possible.
I had the same error, and since it didn't exist before, I figured it must be a dependency conflict.
I'm using Scio to deploy to Dataflow and just referenced what they're using: https://github.com/spotify/scio/blob/v0.7.1/build.sbt
I updated Guava and Protobuf as well.
I know you're using Java, but try updating Beam to 2.9.0 and maybe Guava and Protobuf too.
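If it helps, the suggested version bump in Maven terms would look roughly like this; the artifact IDs below are the usual Beam core and Dataflow runner artifacts and are an assumption about what the pom already contains:
<!-- Sketch of pinning Beam 2.9.0; adjust artifactIds to whatever your pom actually uses -->
<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-sdks-java-core</artifactId>
  <version>2.9.0</version>
</dependency>
<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-runners-google-cloud-dataflow-java</artifactId>
  <version>2.9.0</version>
</dependency>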

Spring-xd strange too many open files error

I upgraded from Spring XD 1.2.1 to 1.3.0 and have both under /opt on my system. After starting XD in single-node mode (but configured to use ZooKeeper), I tried to create another stream (e.g. "time | log"), and Spring XD throws the following exception:
java.io.FileNotFoundException: /opt/spring-xd-1.2.1.RELEASE/xd/config/modules/modules.yml (Too many open files)
I raised the open-file limit with ulimit -n 60000, but that didn't solve the problem. The strange thing is that it still points to spring-xd-1.2.1.RELEASE; I started both xd-singlenode and xd-shell under /opt/spring-xd-1.3.1.RELEASE.
EDIT: added the xd-singlenode process output just to show it's pointing to 1.3.1:
/usr/java/default/bin/java -Dspring.application.name=admin
-Dlogging.config=file:/opt/spring-xd-1.3.0.RELEASE/xd/config//
/xd-singlenode-logback.groovy -Dxd.home=/opt/spring-xd-1.3.0.RELEASE/xd
-Dspring.config.location=file:/opt/spring-xd-1.3.0.RELEASE/xd/config//
-Dxd.config.home=file:/opt
/spring-xd-1.3.0.RELEASE/xd/config//
-Dspring.config.name=servers,application
-Dxd.module.config.location=file:/opt/spring-xd-1.3.0.RELEASE/xd/config//modules/
-Dxd.module.config.name=modules -classpath
/opt/spring-xd-1.3.0.RELEASE/xd/modules/processor/scripts:/opt/spring-xd
-1.3.0.RELEASE/xd/config:/opt/spring-xd-1.3.0.RELEASE/xd/lib/activation-
...
Have you updated your environment variables, specifically XD_CONFIG_LOCATION, based on the error shown above?
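As a sketch of what that check looks like (the 1.3.x path below is taken from the process output above; whether the variable is set at all is the thing to verify):
# See whether an old value is still pointing at the 1.2.1 install
echo $XD_CONFIG_LOCATION

# If so, point it at the new install before restarting xd-singlenode
export XD_CONFIG_LOCATION=/opt/spring-xd-1.3.0.RELEASE/xd/config/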

How can I disable console messages when running maven commands?

I'm in the process of executing Maven commands to run tests in the console (MacOSX). Recently, development efforts have produced extraneous messages in the console (info, debug, warning, etc.). I'd like to know how to remove messages like this:
INFO c.c.m.s.c.p.ApplicationProperties - Loading application properties from: app-config/shiro.properties
I've used this code to remove messages from the dbunit tests:
import org.slf4j.LoggerFactory;
import ch.qos.logback.classic.Level;
ch.qos.logback.classic.Logger dbunitLogger = (ch.qos.logback.classic.Logger) LoggerFactory.getLogger("org.dbunit");
dbunitLogger.setLevel(Level.ERROR); // silence dbunit output below ERROR
However, I'm unsure how to disable these additional (often verbose and irritating) messages from showing up on the console so that I can see the output more easily. Additional messages appear as above and these:
DEBUG c.c.m.s.c.f.dao.AbstractDBDAO - Adding filters to the Main Search query.
WARN c.c.m.s.c.p.JNDIConfigurationProperties - Unable to find JNDI value for name: xxxxx
INFO c.c.m.a.t.d.DatabaseTestFixture - * executing sql: xxxxx
The answer that solved it was:
SOLUTION: The fix was adding a 'logback-test.xml' file to the root of my test folder. I used the default contents (as instructed by the documentation - thanks #Charlie). Once the file was there, the problem was fixed!
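For anyone looking for the file contents, a minimal logback-test.xml along those lines looks roughly like this; the ERROR root level is an assumption chosen to silence the INFO/DEBUG/WARN noise, and the appender follows the basic example in the logback documentation:
<configuration>
  <!-- Simple console appender, as in the logback manual's basic example -->
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <!-- ERROR here is an assumption: raise or lower the level to taste -->
  <root level="ERROR">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>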

Setting appConcurrentRequestLimit in Windows 2008

I read on SO (HTTP Error 503.2 - Service Unavailable. The serverRuntime#appConcurrentRequestLimit setting is being exceeded) and MSDN (http://technet.microsoft.com/en-us/library/dd425294(v=office.13).aspx) that I need to set appConcurrentRequestLimit to a larger number if the site is showing the appConcurrentRequestLimit-exceeded error (which mine is).
However upon executing the command provided by MSDN, I got error
c:\Windows\System32\inetsrv>appcmd.exe set config /section:serverRuntime /appConcurrentRequestLimit:100000
ERROR ( message:Unknown attribute "appConcurrentRequestLimit". Replace with -? for help. )
I tried searching Google, but it seems no one has the same issue as mine.
I also tried editing ASPNET.Config manually as XML, but whatever I change, the site does not seem to pick it up: even when I put random invalid text in the config, my site still shows no error. Is the ASPNET.Config configuration actually being used? Why is there no error even when the configuration is deliberately broken?
The command fails if you have additional sections such as ftpsection.
To edit the serverRuntime settings of the system.webserver section, you should run:
cd %windir%\system32\inetsrv
appcmd.exe set config /section:system.webserver/serverRuntime /appConcurrentRequestLimit:100000
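To confirm the value was applied, you can read the section back afterwards (assuming the standard appcmd list config syntax):
cd %windir%\system32\inetsrv
appcmd.exe list config /section:system.webserver/serverRuntime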

HPCC/HDFS Connector

Does anyone know about the HPCC/HDFS connector? We are using both HPCC and Hadoop. There is a utility (the HPCC/HDFS connector) developed by HPCC that allows an HPCC cluster to access HDFS data.
I have installed the connector, but when I run the program to access data from HDFS, it gives an error saying libhdfs.so.0 doesn't exist.
I tried to build libhdfs.so using the command
ant compile-libhdfs -Dlibhdfs=1
It gives me the error
target "compile-libhdfs" does not exist in the project "hadoop"
I also tried another command,
ant compile-c++-libhdfs -Dlibhdfs=1
which gives the error
ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
[get] Error getting http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
to /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
BUILD FAILED java.net.ConnectException: Connection timed out
Any suggestion would be a great help.
Chhaya, you might not need to build libhdfs.so; depending on how you installed Hadoop, you might already have it.
Check in HADOOP_LOCATION/c++/Linux-<arch>/lib/libhdfs.so, where HADOOP_LOCATION is your hadoop install location, and arch is the machine’s architecture (i386-32 or amd64-64).
Once you locate the lib, make sure the H2H connector is configured correctly (see page 4 here).
It's just a matter of updating the HADOOP_LOCATION var in the config file:
/opt/HPCCSystems/hdfsconnector.conf
Good luck.
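A rough sketch of those two steps, assuming the Hadoop install path shown in the question's build output and a simple key=value line in hdfsconnector.conf (check the connector docs for the exact syntax):
# 1. Confirm the prebuilt library is there (path layout from the answer above)
ls /home/hadoop/hadoop-0.20.203.0/c++/Linux-amd64-64/lib/libhdfs.so

# 2. Point the connector at that install in /opt/HPCCSystems/hdfsconnector.conf
HADOOP_LOCATION=/home/hadoop/hadoop-0.20.203.0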
