HBase Shell Logging

When using the HBase shell, I'm getting a great deal of logging, including INFO and DEBUG messages. While this is interesting in terms of learning HBase internals, it is quite verbose and can bury the output.
I've tried changing the logging levels in a number of different ways, including as described here, and while some of the warnings do disappear, I continue to get a large number of INFO and DEBUG messages, e.g.:
18:50:49.500 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
18:50:49.516 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=ip-10-234-8-223.ec2.internal
18:50:49.517 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.7.0_65
18:50:49.517 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
Besides editing $HBASE_HOME/conf/log4j.properties, I've tried running the shell outside the $HBASE_HOME/bin/hbase shell script. Even setting log4j.rootLogger=OFF doesn't seem to help. Attempting to use Logger.getRootLogger().setLevel(Level.WARN);, per the link above, did not work either.
Are these messages being emitted by a JRuby logger? Are they returned as text to the shell by other components?

Edit and adjust the log levels in the log4j.properties file, which is in the hbase/conf/ folder.
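As an illustration, a minimal sketch of the kind of adjustment meant here (the logger names below are the stock ZooKeeper and HBase package names, which match the messages quoted above; restart the shell after editing):

# Raise the default threshold so INFO/DEBUG chatter is suppressed
log4j.rootLogger=WARN,console
# The ZooKeeper client is the source of the "Client environment" lines
log4j.logger.org.apache.zookeeper=WARN
log4j.logger.org.apache.hadoop.hbase=WARN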

Related

Tomcat IOException: Duplicate accept detected. This is a known OS bug. Please consider reporting that you are affected:

I recently migrated to a new Spring Boot version, namely v2.6.2, and now I find the following exception in some of the services' logs.
23:27:45.148 [main] INFO o.s.b.w.e.tomcat.TomcatWebServer - Tomcat started on port(s): 8080 (https) with context path ''
23:27:47.046 [main] INFO o.a.c.i.engine.AbstractCamelContext - Routes startup (total:1 started:1)
23:27:47.048 [main] INFO o.a.c.i.engine.AbstractCamelContext - Started route1 (jms://queue:inp.contentextractor.entry)
23:27:47.049 [main] INFO o.a.c.i.engine.AbstractCamelContext - Apache Camel 3.14.0 (camel-1) started in 2s228ms (build:622ms init:1s32ms start:574ms)
23:27:47.061 [main] INFO eu.hermes.esb.cloud.Application - Started Application in 25.305 seconds (JVM running for 27.737)
23:28:20.356 [https-jsse-nio-8080-exec-4] INFO o.a.c.c.C.[Tomcat].[localhost].[/] - Initializing Spring DispatcherServlet 'dispatcherServlet'
23:28:20.357 [https-jsse-nio-8080-exec-4] INFO o.s.web.servlet.DispatcherServlet - Initializing Servlet 'dispatcherServlet'
23:28:20.359 [https-jsse-nio-8080-exec-4] INFO o.s.web.servlet.DispatcherServlet - Completed initialization in 2 ms
00:05:19.986 [https-jsse-nio-8080-Acceptor] ERROR org.apache.tomcat.util.net.Acceptor - Socket accept failed
java.io.IOException: Duplicate accept detected. This is a known OS bug. Please consider reporting that you are affected: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1924298
at org.apache.tomcat.util.net.NioEndpoint.serverSocketAccept(NioEndpoint.java:545)
at org.apache.tomcat.util.net.NioEndpoint.serverSocketAccept(NioEndpoint.java:78)
at org.apache.tomcat.util.net.Acceptor.run(Acceptor.java:129)
at java.base/java.lang.Thread.run(Thread.java:834)
05:20:49.981 [https-jsse-nio-8080-Acceptor] ERROR org.apache.tomcat.util.net.Acceptor - Socket accept failed
java.io.IOException: Duplicate accept detected. This is a known OS bug. Please consider reporting that you are affected: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1924298
at org.apache.tomcat.util.net.NioEndpoint.serverSocketAccept(NioEndpoint.java:545)
at org.apache.tomcat.util.net.NioEndpoint.serverSocketAccept(NioEndpoint.java:78)
at org.apache.tomcat.util.net.Acceptor.run(Acceptor.java:129)
at java.base/java.lang.Thread.run(Thread.java:834)
What does it mean, and how harmful is it?
Following the link given in the exception (https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1924298) and looking into comments made by community members, it seems that something in Tomcat's usage pattern uncovered a latent issue in Linux kernel 5.10-rc4 that was most likely fixed in 5.10-rc6.
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1924298/comments/15
This issue was reported across CentOS, Amazon Linux, Arch Linux, and even Windows Server 2019, whenever a Linux kernel lower than 5.10 was involved.
The conclusion of that discussion seems to be to update the OS itself to a stable version that ships Linux kernel >= 5.10.
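As a quick check before (and after) upgrading, you can confirm which kernel a host is actually running with the standard uname flag; anything below 5.10 is in the affected range according to the discussion above:

$ uname -r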

Getting an error while installing Cassandra

I am trying to install Cassandra on my local system and am getting the error below. Could you please help me resolve it?
INFO [main] 2021-04-22 19:23:16,662 CassandraDaemon.java:507 - JVM Arguments: [-ea, -javaagent:C:\Cassandra\apache-cassandra-3.11.10\lib\jamm-0.3.0.jar, -Xms2G, -Xmx2G, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseParNewGC, -XX:+UseConcMarkSweepGC, -XX:+CMSParallelRemarkEnabled, -XX:SurvivorRatio=8, -XX:MaxTenuringThreshold=1, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Dlogback.configurationFile=logback.xml, -Djava.library.path=C:\Cassandra\apache-cassandra-3.11.10\lib\sigar-bin, -Dcassandra.jmx.local.port=7199, -Dcassandra, -Dcassandra-foreground=yes, -Dcassandra.logdir=C:\Cassandra\apache-cassandra-3.11.10\logs, -Dcassandra.storagedir=C:\Cassandra\apache-cassandra-3.11.10\data]
WARN [main] 2021-04-22 19:23:16,682 StartupChecks.java:169 - JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
WARN [main] 2021-04-22 19:23:16,688 StartupChecks.java:220 - The JVM is not configured to stop on OutOfMemoryError which can cause data corruption. Use one of the following JVM options to configure the behavior on OutOfMemoryError: -XX:+ExitOnOutOfMemoryError, -XX:+CrashOnOutOfMemoryError, or -XX:OnOutOfMemoryError="<cmd args>;<cmd args>"
These aren't errors; they are just warnings that some parameters aren't configured, with suggestions for options you can set. Specifically, you can ignore the second line (about JMX), but the last line is more important: open conf/jvm.options and add a line with the -XX:+ExitOnOutOfMemoryError option.
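A minimal sketch of that addition (append it to conf/jvm.options under your Cassandra install directory and restart the node):

# Stop the JVM on OutOfMemoryError instead of limping on and risking data corruption
-XX:+ExitOnOutOfMemoryError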

Apache NiFi on Windows: unable to load NAR library bundles

I'm only attempting to launch the NiFi UI as a local instance to start playing with it. I've unzipped the package and made sure to set the JAVA_HOME variable to my Java 1.8. When I try to run bin/run-nifi, the error message in my nifi-app log is:
2018-05-03 15:03:50,585 INFO [main] org.apache.nifi.NiFi Launching NiFi...
2018-05-03 15:03:52,330 INFO [main] o.a.nifi.properties.NiFiPropertiesLoader Determined default nifi.properties path to be 'Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\conf\nifi.properties'
2018-05-03 15:03:52,363 INFO [main] o.a.nifi.properties.NiFiPropertiesLoader Loaded 146 properties from Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\conf\nifi.properties
2018-05-03 15:03:52,423 INFO [main] org.apache.nifi.NiFi Loaded 146 properties
2018-05-03 15:03:52,779 INFO [main] org.apache.nifi.BootstrapListener Started Bootstrap Listener, Listening for incoming requests on port 64802
2018-05-03 15:03:53,071 INFO [main] org.apache.nifi.BootstrapListener Successfully initiated communication with Bootstrap
2018-05-03 15:03:53,181 WARN [main] org.apache.nifi.nar.NarUnpacker Unable to load NAR library bundles due to java.io.IOException: Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\work\nar\framework directory does not have read/write privilege Will proceed without loading any further Nar bundles
2018-05-03 15:03:53,242 ERROR [main] org.apache.nifi.NiFi Failure to launch NiFi due to java.io.IOException: Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\work\nar\framework could not be created
java.io.IOException: Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\work\nar\framework could not be created
at org.apache.nifi.util.FileUtils.ensureDirectoryExistAndCanReadAndWrite(FileUtils.java:48)
at org.apache.nifi.nar.NarClassLoaders.load(NarClassLoaders.java:155)
at org.apache.nifi.nar.NarClassLoaders.init(NarClassLoaders.java:131)
at org.apache.nifi.NiFi.<init>(NiFi.java:133)
at org.apache.nifi.NiFi.<init>(NiFi.java:71)
at org.apache.nifi.NiFi.main(NiFi.java:292)
2018-05-03 15:03:53,383 INFO [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...
2018-05-03 15:03:53,387 INFO [Thread-1] org.apache.nifi.NiFi Jetty web server shutdown completed (nicely or otherwise).
I've followed the installation instructions and haven't been able to troubleshoot. How do I load these NAR files when running NiFi?
Thanks
I believe the underlying error in your output is java.io.IOException: Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\work\nar\framework could not be created.
NiFi requires file permissions to create and write to several directories; there is a list in the NiFi Admin Guide: How to install and start NiFi. NiFi uses these to unpack the NAR files, write logs, and maintain the various data repositories that comprise your data flow.
You have a few options:
Modify the permissions of the directory to allow NiFi read/write access. This can be done for each individual child directory.
Copy the entire NiFi distribution to a read/write location and run it from there.
Edit the conf/nifi.properties file to change the locations of these directories to read/write locations, as sketched after this list. See NiFi Admin Guide: System Properties for help on the properties.
Symlinks are a great solution on systems that support them.
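As an illustration of the nifi.properties option above, here is a sketch of the sort of overrides involved (the property names are the standard NiFi ones, but verify them against your version's Admin Guide; the paths are placeholders pointing at a writable location):

# Relocate the NAR working area and the main repositories to a writable drive
nifi.nar.working.directory=C:/nifi-work/nar/
nifi.flowfile.repository.directory=C:/nifi-work/flowfile_repository
nifi.content.repository.directory.default=C:/nifi-work/content_repository
nifi.provenance.repository.directory.default=C:/nifi-work/provenance_repository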
Two things you can try:
Run NiFi with administrator privileges (not a good practice) by going to ~\<NIFI_INSTALLATION_DIR>\bin, right-clicking run-nifi.bat, and clicking Run as Administrator.
Move the NiFi directory to a location the logged-in user has full access to, e.g. C:\Users\<YOUR_USER>\Documents\. Then try to execute bin\run-nifi.bat again.
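If you would rather grant permissions in place than move the directory, one option on Windows is icacls; the path below is a placeholder for your actual install directory, and %USERNAME% resolves to the logged-in user:

icacls "C:\path\to\nifi" /grant "%USERNAME%":(OI)(CI)F /T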
Similar to the resolution that James proposed, I had to do the three-step process below.
My scenario: I'm using Docker containers and had the same problem. Even changing my container's user to root didn't work. So, I did the following:
1 - Download MiNiFi: https://nifi.apache.org/minifi/download.html
2 - Untar and execute the MiNiFi agent on my own laptop (I'm using a Mac) so that the necessary folders and files are created.
3 - Tar it up again and add it to the Dockerfile for my container build.
Done! Everything worked fine after that.
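A rough sketch of what step 3 can look like in the Dockerfile (the archive name and target path are hypothetical; Docker's ADD instruction auto-extracts a local tar archive):

# Ship the pre-initialized MiNiFi tree into the image
ADD minifi-initialized.tar.gz /opt/minifi/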

Launch Neo4j 2.3.2 from the commandline

I have installed Neo4j 2.3.2 Community Edition on Mac OS 10.10. I can launch the application and connect to it from localhost:7474/browser/. So far, so good.
I would like to launch Neo4j 2.3.2 from a Terminal window, so that I don't have the overhead of a windowed application running at the same time. When I run the following command...
$ ~/neo4j/bin/neo4j console
... I get this output in the Terminal window:
WARNING: Max 256 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
Starting Neo4j Server console-mode...
Unable to find any JVMs matching version "1.7".
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=conf/neo4j-server.properties -Djava.util.logging.config.file=conf/logging.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:-OmitStackTraceInFastThrow -XX:hashCode=5 -Dneo4j.ext.udc.source=tarball
2016-02-25 14:03:18.755+0000 INFO [API] Setting startup timeout to: 120000ms based on 120000
2016-02-25 14:03:58.356+0000 INFO [API] Successfully started database
2016-02-25 14:04:04.220+0000 INFO [API] Starting HTTP on port :7474 with 2 threads available
2016-02-25 14:04:13.512+0000 INFO [API] Enabling HTTPS on port :7473
09:04:20.201 [main] INFO org.eclipse.jetty.util.log - Logging initialized #98517ms
2016-02-25 14:04:23.034+0000 INFO [API] Mounting static content at [/webadmin] from [webadmin-html]
2016-02-25 14:04:25.785+0000 INFO [API] Mounting static content at [/browser] from [browser]
09:04:25.993 [main] INFO org.eclipse.jetty.server.Server - jetty-9.2.4.v20141103
09:04:26.722 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.h.MovedContextHandler#1611ba2{/,null,AVAILABLE}
09:04:27.794 [main] INFO o.e.j.w.StandardDescriptorProcessor - NO JSP Support for /webadmin, did not find org.apache.jasper.servlet.JspServlet
09:04:27.981 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext#132ea25{/webadmin,jar:file:/Users/james/neo4j/system/lib/neo4j-server-2.2.5-static-web.jar!/webadmin-html,AVAILABLE}
09:04:38.841 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler#60bfaa02{/db/manage,null,AVAILABLE}
09:04:39.326 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler#28e2e149{/db/data,null,AVAILABLE}
09:04:39.353 [main] INFO o.e.j.w.StandardDescriptorProcessor - NO JSP Support for /browser, did not find org.apache.jasper.servlet.JspServlet
09:04:39.355 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext#78e6aa71{/browser,jar:file:/Users/james/neo4j/system/lib/neo4j-browser-2.2.5.jar!/browser,AVAILABLE}
09:04:39.536 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler#4994d9ab{/,null,AVAILABLE}
09:04:39.745 [main] INFO o.e.jetty.server.ServerConnector - Started ServerConnector#2d19cf20{HTTP/1.1}{localhost:7474}
09:04:40.576 [main] INFO o.e.jetty.server.ServerConnector - Started ServerConnector#43c742c{SSL-HTTP/1.1}{localhost:7473}
09:04:40.577 [main] INFO org.eclipse.jetty.server.Server - Started #119058ms
2016-02-25 14:04:40.577+0000 INFO [API] Server started on: http://localhost:7474/
2016-02-25 14:04:40.590+0000 INFO [API] Remote interface ready and available at [http://localhost:7474/]
I have Java version 8, update 74 installed (build 1.8.0_74-b02), so I assume that I can ignore the warning Unable to find any JVMs matching version "1.7".
However, when I visit http://localhost:7474/ in Chrome Version 45.0.2454.85 (64-bit), I see three errors in the Developer Console: two files that fail to load and a subsequent script error.
localhost/:28 GET http://localhost:7474/browser/styles/68eddd94.main.css
localhost/:466 GET http://localhost:7474/browser/scripts/ded362b3.scripts.js
Uncaught Error: [$injector:modulerr] Failed to instantiate module neo4jApp due to:
Error: [$injector:nomod] Module 'neo4jApp' is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you specify the dependencies as the second argument.
As a result, the Neo4j interface does not appear in the browser window.
Is it possible to run Neo4j 2.3.2 from the Terminal, and if so, what do I need to do to get http://localhost:7474/ to load correctly?
This looks like a JS file mismatch due to aggressive browser caching: note that your startup log serves the UI from neo4j-server-2.2.5 and neo4j-browser-2.2.5 jars, so assets cached from another version would not line up. Shift-reload, or test in an incognito window.
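One cache-independent way to see what the server is actually returning for a failing asset is a HEAD request with curl (the URL is taken from the console error above):

$ curl -sI http://localhost:7474/browser/styles/68eddd94.main.css

A 200 response here while the browser still fails points at the browser cache; a 404 points at the server side.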

PIG command execution

I am learning Hadoop by myself, so I am not sure if what I am asking is even a problem. When I run the command pig -x local to run it locally, I get the following messages:
15/10/05 15:23:28 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
15/10/05 15:23:28 INFO pig.ExecTypeProvider: Picked LOCAL as the ExecType
2015-10-05 15:23:28,830 [main] INFO org.apache.pig.Main - Apache Pig version 0.15.0 (r1682971) compiled Jun 01 2015, 11:44:35
2015-10-05 15:23:28,831 [main] INFO org.apache.pig.Main - Logging error messages to: /home/nkhl/pig_1444038808829.log
2015-10-05 15:23:29,050 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/nkhl/.pigbootup not found
2015-10-05 15:23:29,333 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-10-05 15:23:29,334 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2015-10-05 15:23:29,335 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
2015-10-05 15:23:29,562 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
It looks different on my online tutor's screen so I am a little confused.
What concerns me most is the deprecation part. Can someone help me with that, please? What is it trying to say? Don't get me wrong, everything works fine. The Grunt shell loads up, and things execute fine. I just wanted to know what it means.
It's an Ubuntu machine.
Thanks!
Running Pig in local mode is great, AFAIK, if you are using it for some quick testing, like displaying sysout output from a UDF.
You can safely ignore the warnings above. They are saying that some of the variables set in the *-site.xml configuration files are deprecated.
You can switch off those messages by adjusting the
log4j.logger.org.apache.hadoop.conf.Configuration.deprecation
logger in the log4j.properties file.
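A minimal sketch of that change (raising this logger's threshold above INFO suppresses the deprecation lines; Pig can be pointed at a log4j file with its -4/-log4jconf option, assuming your build exposes it):

log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=ERROR

$ pig -x local -4 /path/to/log4j.properties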
You have some Hadoop-related variables set, such as HADOOP_HOME or HADOOP_PREFIX or HADOOP_CONF_DIR, which aren't needed if you are running Pig in local mode.
unset HADOOP_HOME
unset HADOOP_PREFIX
unset HADOOP_CONF_DIR
Deprecations aren't scary. They are a reminder that the code is calling on something that will eventually go away in a future version. These specific deprecations are caused by differences between Hadoop 1 and Hadoop 2; Pig is compatible with both versions. If you happened to be using Hadoop 1.2.1 instead of 2.x, you wouldn't see the warnings, because Pig checks the Hadoop 1 values first.
If you're interested in learning more, you can check out the Pig source code.
https://github.com/apache/pig/blob/release-0.15.0/src/org/apache/pig/backend/hadoop/executionengine/HExecutionEngine.java#L219-L222
