I have installed ArangoDB through brew. I am new to both Mac and ArangoDB. Right after installing ArangoDB I could start and stop it through brew services, but since yesterday that hasn't worked. However, arangod start worked. Today it's taking a really long time for the service to start up:
$ arangod start
2018-04-30T07:40:32Z [3593] INFO ArangoDB 3.3.7 [darwin] 64bit, using jemalloc, build , VPack 0.1.30, RocksDB 5.6.0, ICU 58.1, V8 5.7.492.77, OpenSSL 1.0.2o 27 Mar 2018
2018-04-30T07:40:32Z [3593] INFO {authentication} Jwt secret not specified, generating...
2018-04-30T07:40:32Z [3593] INFO using storage engine mmfiles
2018-04-30T07:40:32Z [3593] INFO {cluster} Starting up with role SINGLE
2018-04-30T07:40:32Z [3593] INFO {syscall} file-descriptors (nofiles) hard limit is unlimited, soft limit is 8192
2018-04-30T07:40:32Z [3593] INFO {authentication} Authentication is turned on (system only), authentication for unix sockets is turned on
2018-04-30T07:40:32Z [3593] INFO running WAL recovery (1 logfiles)
2018-04-30T07:40:32Z [3593] INFO replaying WAL logfile '/Users/neel/start/journals/logfile-17009.db' (1 of 1)
2018-04-30T07:40:32Z [3593] INFO WAL recovery finished successfully
2018-04-30T07:40:33Z [3593] INFO using endpoint 'http+tcp://127.0.0.1:8529' for non-encrypted requests
2018-04-30T07:41:33Z [3593] WARNING {v8} giving up waiting for unused V8 context after 60.000000 s
2018-04-30T07:41:43Z [3593] WARNING {v8} giving up waiting for unused V8 context after 60.000000 s
2018-04-30T07:42:34Z [3593] WARNING {v8} giving up waiting for unused V8 context after 60.000000 s
2018-04-30T07:43:05Z [3593] INFO ArangoDB (version 3.3.7 [darwin]) is ready for business. Have fun!
I don't know where the log files are, so when I try to start with brew services start arangodb I can't check whether it has actually started: it immediately responds with Successfully started arangodb (label: homebrew.mxcl.arangodb). So my questions are: why is the startup delayed, and where are the log files?
The log files are located here: /usr/local/var/log/arangodb3
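To see whether the brew service actually came up, you can watch that log and query the server; a quick sketch (the exact log file name, and whether the version endpoint asks for credentials, may differ on your setup):

$ brew services list
$ tail -f /usr/local/var/log/arangodb3/arangod.log
$ curl http://127.0.0.1:8529/_api/version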
The delay above is caused by a lack of available V8 contexts. You can adjust them in /usr/local/etc/arangodb3/arangod.conf. The default value there is 0, which means ArangoDB decides how many to run.
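If you want to pin the number explicitly instead, it goes in the [javascript] section of that file; a sketch (the value is illustrative, pick one that fits your machine), followed by a service restart:

[javascript]
v8-contexts = 16

$ brew services restart arangodb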
Related
Attempting to run H2O on an HDP 3.1 cluster and running into an error that appears to be about YARN resource capacity...
[ml1user@HW04 h2o-3.26.0.1-hdp3.1]$ hadoop jar h2odriver.jar -nodes 3 -mapperXmx 10g
Determining driver host interface for mapper->driver callback...
[Possible callback IP address: 192.168.122.1]
[Possible callback IP address: 172.18.4.49]
[Possible callback IP address: 127.0.0.1]
Using mapper->driver callback IP address and port: 172.18.4.49:46015
(You can override these with -driverif and -driverport/-driverportrange and/or specify external IP using -extdriverif.)
Memory Settings:
mapreduce.map.java.opts: -Xms10g -Xmx10g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Dlog4j.defaultInitOverride=true
Extra memory percent: 10
mapreduce.map.memory.mb: 11264
Hive driver not present, not generating token.
19/07/25 14:48:05 INFO client.RMProxy: Connecting to ResourceManager at hw01.ucera.local/172.18.4.46:8050
19/07/25 14:48:06 INFO client.AHSProxy: Connecting to Application History server at hw02.ucera.local/172.18.4.47:10200
19/07/25 14:48:07 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/ml1user/.staging/job_1564020515809_0006
19/07/25 14:48:08 INFO mapreduce.JobSubmitter: number of splits:3
19/07/25 14:48:08 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1564020515809_0006
19/07/25 14:48:08 INFO mapreduce.JobSubmitter: Executing with tokens: []
19/07/25 14:48:08 INFO conf.Configuration: found resource resource-types.xml at file:/etc/hadoop/3.1.0.0-78/0/resource-types.xml
19/07/25 14:48:08 INFO impl.YarnClientImpl: Submitted application application_1564020515809_0006
19/07/25 14:48:08 INFO mapreduce.Job: The url to track the job: http://HW01.ucera.local:8088/proxy/application_1564020515809_0006/
Job name 'H2O_47159' submitted
JobTracker job ID is 'job_1564020515809_0006'
For YARN users, logs command is 'yarn logs -applicationId application_1564020515809_0006'
Waiting for H2O cluster to come up...
ERROR: Timed out waiting for H2O cluster to come up (120 seconds)
ERROR: (Try specifying the -timeout option to increase the waiting time limit)
Attempting to clean up hadoop job...
19/07/25 14:50:19 INFO impl.YarnClientImpl: Killed application application_1564020515809_0006
Killed.
19/07/25 14:50:23 INFO client.RMProxy: Connecting to ResourceManager at hw01.ucera.local/172.18.4.46:8050
19/07/25 14:50:23 INFO client.AHSProxy: Connecting to Application History server at hw02.ucera.local/172.18.4.47:10200
----- YARN cluster metrics -----
Number of YARN worker nodes: 3
----- Nodes -----
Node: http://HW03.ucera.local:8042 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 15.0 GB used, 0 / 3 vcores used
Node: http://HW04.ucera.local:8042 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 15.0 GB used, 0 / 3 vcores used
Node: http://HW02.ucera.local:8042 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 15.0 GB used, 0 / 3 vcores used
----- Queues -----
Queue name: default
Queue state: RUNNING
Current capacity: 0.00
Capacity: 1.00
Maximum capacity: 1.00
Application count: 0
Queue 'default' approximate utilization: 0.0 / 45.0 GB used, 0 / 9 vcores used
----------------------------------------------------------------------
ERROR: Unable to start any H2O nodes; please contact your YARN administrator.
A common cause for this is the requested container size (11.0 GB)
exceeds the following YARN settings:
yarn.nodemanager.resource.memory-mb
yarn.scheduler.maximum-allocation-mb
----------------------------------------------------------------------
For YARN users, logs command is 'yarn logs -applicationId application_1564020515809_0006'
Looking in the YARN configs in the Ambari UI, these properties are nowhere to be found. But checking the YARN logs in the ResourceManager UI and looking at some of the logs for the killed application, I see what appear to be unreachable-host errors...
Container: container_e05_1564020515809_0006_02_000002 on HW03.ucera.local_45454_1564102219781
LogAggregationType: AGGREGATED
=============================================================================================
LogType:stderr
LogLastModifiedTime:Thu Jul 25 14:50:19 -1000 2019
LogLength:2203
LogContents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/yarn/local/filecache/11/mapreduce.tar.gz/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/yarn/local/usercache/ml1user/appcache/application_1564020515809_0006/filecache/10/job.jar/job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapred.YarnChild).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
java.net.NoRouteToHostException: No route to host (Host unreachable)
at java.net.PlainSocketImpl.socketConnect(Native Method)
....
at java.net.Socket.<init>(Socket.java:211)
at water.hadoop.EmbeddedH2OConfig$BackgroundWriterThread.run(EmbeddedH2OConfig.java:38)
End of LogType:stderr
***********************************************************************
Taking note of "java.net.NoRouteToHostException: No route to host (Host unreachable)". However, I can access every node from every other node and they can all ping each other, so I'm not sure what is going on here. Any suggestions for debugging or fixing this?
I think I found the problem. TL;DR: firewalld (the nodes run on CentOS 7) was still running, when it should be disabled on HDP clusters.
From another community post:
For Ambari to communicate during setup with the hosts it deploys to and manages, certain ports must be open and available. The easiest way to do this is to temporarily disable iptables, as follows:
systemctl disable firewalld
service firewalld stop
So apparently iptables and firewalld need to be disabled across the cluster (supporting docs can be found here; initially I had only disabled them on the Ambari installation node). After stopping these services across the cluster (I recommend using clush), I was able to run the YARN job without incident.
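For reference, a sketch of doing this on every node at once with clush (assumes clush is already configured with the cluster's nodes; -a targets all configured nodes, -b gathers identical output):

$ clush -a -b 'systemctl stop firewalld && systemctl disable firewalld'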
Normally, this problem is either due to bad DNS configuration, firewalls, or network unreachability. To quote this official doc:
The hostname of the remote machine is wrong in the configuration files
The client's host table /etc/hosts has an invalid IPAddress for the target host.
The DNS server's host table has an invalid IPAddress for the target host.
The client's routing tables (In Linux, iptables) are wrong.
The DHCP server is publishing bad routing information.
Client and server are on different subnets, and are not set up to talk to each other. This may be an accident, or it is to deliberately lock down the Hadoop cluster.
The machines are trying to communicate using IPv6. Hadoop does not currently support IPv6
The host's IP address has changed but a long-lived JVM is caching the old value. This is a known problem with JVMs (search for "java negative DNS caching" for the details and solutions). The quick solution: restart the JVMs
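A few quick checks that cover most of the items above, run from the machine reporting the error (a sketch; substitute the host and port that actually appear in your stack trace, e.g. the driver callback 172.18.4.49:46015 from the output above):

$ getent hosts hw03.ucera.local      # what the client actually resolves
$ ping -c 1 172.18.4.49              # basic reachability
$ nc -zv 172.18.4.49 46015           # is the specific port reachable?
$ ip route get 172.18.4.49           # which route/interface is chosen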
For me, the problem was that the driver was inside a Docker container, which made it impossible for the workers to send data back to it. In other words, the workers and the driver were not on the same subnet. The solution, as given in this answer, was to set the following configurations:
spark.driver.host=<container's host IP accessible by the workers>
spark.driver.bindAddress=0.0.0.0
spark.driver.port=<forwarded port 1>
spark.driver.blockManager.port=<forwarded port 2>
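For example, when submitting from inside the container, these can be passed straight to spark-submit; a sketch with hypothetical values for the host IP, forwarded ports, and application file:

$ spark-submit \
    --conf spark.driver.host=10.0.0.5 \
    --conf spark.driver.bindAddress=0.0.0.0 \
    --conf spark.driver.port=40000 \
    --conf spark.driver.blockManager.port=40001 \
    my_app.py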
I'm only attempting to launch the NiFi UI as a local instance to start playing with it. I've unzipped the package and made sure to set the JAVA_HOME variable to my Java 1.8 installation. When I try bin/run-nifi, the error message in my nifi-app log is:
2018-05-03 15:03:50,585 INFO [main] org.apache.nifi.NiFi Launching NiFi...
2018-05-03 15:03:52,330 INFO [main] o.a.nifi.properties.NiFiPropertiesLoader Determined default nifi.properties path to be 'Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\conf\nifi.properties'
2018-05-03 15:03:52,363 INFO [main] o.a.nifi.properties.NiFiPropertiesLoader Loaded 146 properties from Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\conf\nifi.properties
2018-05-03 15:03:52,423 INFO [main] org.apache.nifi.NiFi Loaded 146 properties
2018-05-03 15:03:52,779 INFO [main] org.apache.nifi.BootstrapListener Started Bootstrap Listener, Listening for incoming requests on port 64802
2018-05-03 15:03:53,071 INFO [main] org.apache.nifi.BootstrapListener Successfully initiated communication with Bootstrap
2018-05-03 15:03:53,181 WARN [main] org.apache.nifi.nar.NarUnpacker Unable to load NAR library bundles due to java.io.IOException: Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\work\nar\framework directory does not have read/write privilege Will proceed without loading any further Nar bundles
2018-05-03 15:03:53,242 ERROR [main] org.apache.nifi.NiFi Failure to launch NiFi due to java.io.IOException: Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\work\nar\framework could not be created
java.io.IOException: Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\work\nar\framework could not be created
at org.apache.nifi.util.FileUtils.ensureDirectoryExistAndCanReadAndWrite(FileUtils.java:48)
at org.apache.nifi.nar.NarClassLoaders.load(NarClassLoaders.java:155)
at org.apache.nifi.nar.NarClassLoaders.init(NarClassLoaders.java:131)
at org.apache.nifi.NiFi.<init>(NiFi.java:133)
at org.apache.nifi.NiFi.<init>(NiFi.java:71)
at org.apache.nifi.NiFi.main(NiFi.java:292)
2018-05-03 15:03:53,383 INFO [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...
2018-05-03 15:03:53,387 INFO [Thread-1] org.apache.nifi.NiFi Jetty web server shutdown completed (nicely or otherwise).
I've followed the installation instructions and haven't been able to troubleshoot this. How do I load these NAR files when running NiFi?
Thanks
I believe the underlying error in your output is java.io.IOException: Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\work\nar\framework could not be created.
NiFi requires file permissions to create and write to several directories; there is a list in the NiFi Admin Guide: How to install and start NiFi. NiFi uses these directories to unpack the NAR files, write logs, and hold the various data repositories that comprise your data flow.
You have a few options:
Modify the permissions of the directory to allow NiFi read/write access. This can be done for each individual child directory.
Copy the entire NiFi distribution to a read/write location and run it from there.
Edit the conf/nifi.properties file to change the locations of these directories to read/write locations (see the sketch after this list). See NiFi Admin Guide: System Properties for help on the properties.
Symlinks are a great solution for systems that support symlinks.
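As a sketch of the nifi.properties option, these are the kinds of directory properties that have to point somewhere NiFi can write (the paths here are hypothetical; adjust them to a writable location on your machine):

nifi.nar.working.directory=C:/nifi-work/nar/
nifi.documentation.working.directory=C:/nifi-work/docs/components
nifi.database.directory=C:/nifi-work/database_repository
nifi.flowfile.repository.directory=C:/nifi-work/flowfile_repository
nifi.content.repository.directory.default=C:/nifi-work/content_repository
nifi.provenance.repository.directory.default=C:/nifi-work/provenance_repository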
Two things you can try:
Run NiFi with administrator privileges (not a good practice): go to <NIFI_INSTALLATION_DIR>\bin, right-click run-nifi.bat, and click Run as Administrator.
Move the NiFi directory to a location the logged-in user has full access to, e.g. C:\Users\<YOUR_USER>\Documents\. Then try to execute bin\run-nifi.bat.
Similar to the resolution that James proposed, I had to follow the three-step process below.
My scenario: I'm using Docker containers and had the same problem. Even changing the user of my container to root didn't work, so I did the following:
1 - Download MiNiFi: https://nifi.apache.org/minifi/download.html
2 - Untar and run the MiNiFi agent on my own laptop (I'm using a Mac) so that the necessary folders and files are created.
3 - Tar it up again and add it to the Dockerfile that builds my container.
Done! Everything worked fine after that.
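For step 3, a minimal Dockerfile fragment; this is only a sketch: the archive name, base image, and extracted directory name are hypothetical, ADD is used because it auto-extracts a local tar archive, and the start command may differ by MiNiFi version:

FROM openjdk:8-jre
ADD minifi-prepared.tar.gz /opt/
WORKDIR /opt/minifi-0.5.0
CMD ["./bin/minifi.sh", "run"]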
I have installed Neo4j 2.3.2 Community Edition on Mac OS 10.10. I can launch the application and connect to it from localhost:7474/browser/. So far, so good.
I would like to launch Neo4j 2.3.2 from a Terminal window, so that I don't have the overhead of a windowed application running at the same time. When I run the following command...
$ ~/neo4j/bin/neo4j console
... I get this output in the Terminal window:
WARNING: Max 256 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
Starting Neo4j Server console-mode...
Unable to find any JVMs matching version "1.7".
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=conf/neo4j-server.properties -Djava.util.logging.config.file=conf/logging.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:-OmitStackTraceInFastThrow -XX:hashCode=5 -Dneo4j.ext.udc.source=tarball
2016-02-25 14:03:18.755+0000 INFO [API] Setting startup timeout to: 120000ms based on 120000
2016-02-25 14:03:58.356+0000 INFO [API] Successfully started database
2016-02-25 14:04:04.220+0000 INFO [API] Starting HTTP on port :7474 with 2 threads available
2016-02-25 14:04:13.512+0000 INFO [API] Enabling HTTPS on port :7473
09:04:20.201 [main] INFO org.eclipse.jetty.util.log - Logging initialized #98517ms
2016-02-25 14:04:23.034+0000 INFO [API] Mounting static content at [/webadmin] from [webadmin-html]
2016-02-25 14:04:25.785+0000 INFO [API] Mounting static content at [/browser] from [browser]
09:04:25.993 [main] INFO org.eclipse.jetty.server.Server - jetty-9.2.4.v20141103
09:04:26.722 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.h.MovedContextHandler#1611ba2{/,null,AVAILABLE}
09:04:27.794 [main] INFO o.e.j.w.StandardDescriptorProcessor - NO JSP Support for /webadmin, did not find org.apache.jasper.servlet.JspServlet
09:04:27.981 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext#132ea25{/webadmin,jar:file:/Users/james/neo4j/system/lib/neo4j-server-2.2.5-static-web.jar!/webadmin-html,AVAILABLE}
09:04:38.841 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler#60bfaa02{/db/manage,null,AVAILABLE}
09:04:39.326 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler#28e2e149{/db/data,null,AVAILABLE}
09:04:39.353 [main] INFO o.e.j.w.StandardDescriptorProcessor - NO JSP Support for /browser, did not find org.apache.jasper.servlet.JspServlet
09:04:39.355 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext#78e6aa71{/browser,jar:file:/Users/james/neo4j/system/lib/neo4j-browser-2.2.5.jar!/browser,AVAILABLE}
09:04:39.536 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler#4994d9ab{/,null,AVAILABLE}
09:04:39.745 [main] INFO o.e.jetty.server.ServerConnector - Started ServerConnector#2d19cf20{HTTP/1.1}{localhost:7474}
09:04:40.576 [main] INFO o.e.jetty.server.ServerConnector - Started ServerConnector#43c742c{SSL-HTTP/1.1}{localhost:7473}
09:04:40.577 [main] INFO org.eclipse.jetty.server.Server - Started #119058ms
2016-02-25 14:04:40.577+0000 INFO [API] Server started on: http://localhost:7474/
2016-02-25 14:04:40.590+0000 INFO [API] Remote interface ready and available at [http://localhost:7474/]
I have Java version 8, update 74 installed (build 1.8.0_74-b02), so I assume that I can ignore the warning Unable to find any JVMs matching version "1.7".
However, when I visit http://localhost:7474/ in Chrome Version 45.0.2454.85 (64-bit), I see three errors in the Developer Console: two files that fail to load and a subsequent script error.
localhost/:28 GET http://localhost:7474/browser/styles/68eddd94.main.css
localhost/:466 GET http://localhost:7474/browser/scripts/ded362b3.scripts.js
Uncaught Error: [$injector:modulerr] Failed to instantiate module neo4jApp due to:
Error: [$injector:nomod] Module 'neo4jApp' is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you specify the dependencies as the second argument.
As a result, the Neo4j interface does not appear in the browser window.
Is it possible to run Neo4j 2.3.2 from the Terminal, and if so, what do I need to do to get http://localhost:7474/ to load correctly?
Shift-reload, or test in an incognito window.
Looks like a JS file mismatch due to aggressive browser caching.
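A quick way to tell whether those assets are genuinely missing or just stale in the browser cache is to request them outside the browser (assuming curl is available):

$ curl -I http://localhost:7474/browser/styles/68eddd94.main.css
$ curl -I http://localhost:7474/browser/scripts/ded362b3.scripts.js

If they return 200 while the browser still fails, the cache is the culprit; a 404 means the server really isn't serving those files.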
I have a fresh install of the Hortonworks 2.3_1 sandbox for Oracle VirtualBox, and I get a java.net.SocketTimeoutException whenever I try to run a MapReduce job. I changed nothing other than the memory and the cores available to the VM.
Full text of the run:
WARNING: Use "yarn jar" to launch YARN applications.
15/09/01 01:15:17 INFO impl.TimelineClientImpl: Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
15/09/01 01:15:20 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050
15/09/01 01:16:19 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
15/09/01 01:18:09 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-601678901-10.0.2.15-1439987491556:blk_1073742292_1499
java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.0.2.15:52924 remote=/10.0.2.15:50010]
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:749)
15/09/01 01:18:11 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/root/.staging/job_1441069639378_0001
Exception in thread "main" java.io.IOException: All datanodes DatanodeInfoWithStorage[10.0.2.15:50010,DS-56099a5f-3cb3-426e-8e1a-ff3b53df9bf2,DISK] are bad. Aborting...
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1117)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:909)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:412)
Full name of the OVA file I am using: Sandbox_HDP_2.3_1_virtualbox.ova
My host is a Windows 7 Home Premium machine with eight threads of execution (four hyperthreaded cores, I think).
The problem was exactly what it seemed: a timeout error. I fixed it by going to the Hadoop config folder and raising all the timeouts as well as the number of retries (although, from the log, the retries didn't come into play), and by stopping unnecessary services on both the host and guest operating systems.
Thanks, sunrise76; one of those issues pointed me to the config folder.
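The exact properties aren't named above; the HDFS socket timeouts that correspond to the 65000 ms error are usually these, set in hdfs-site.xml (a sketch; the values are illustrative):

<property>
  <name>dfs.client.socket-timeout</name>
  <value>300000</value>
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>600000</value>
</property>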
When using the HBase shell, I'm getting a great deal of logging, including INFO and DEBUG messages. While this is interesting in terms of learning HBase internals, it is quite verbose and can bury the output.
I've tried changing the logging levels in a number of different ways, including as described here, and while some of the warnings do disappear, I continue to get a large number of INFO and DEBUG messages, e.g.:
18:50:49.500 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
18:50:49.516 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=ip-10-234-8-223.ec2.internal
18:50:49.517 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.7.0_65
18:50:49.517 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
Besides editing $HBASE_HOME/conf/log4j.properties, I've tried running the shell outside the $HBASE_HOME/bin/hbase shell script. Even setting log4j.rootLogger=OFF doesn't seem to help. Attempting to use Logger.getRootLogger().setLevel(Level.WARN);, per the above link, did not work either.
Are these messages being emitted by a JRuby logger? Are they returned as text to the shell by other components?
Edit and adjust the log levels in the log4j.properties file, which is in the hbase/conf/ folder.
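For example, a sketch of the kind of entries that quiet things down (the appender name after the root logger must match one defined in that file, and exact logger names can vary by HBase version):

# keep only warnings and above globally
log4j.rootLogger=WARN,console
# silence the chattiest clients seen in the output above
log4j.logger.org.apache.zookeeper=WARN
log4j.logger.org.apache.hadoop.hbase=WARN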