Selenium stalls at "Launching Firefox...", no errors or exceptions

When I try to run Selenium remotely on our RedHat box, it just stays at "Launching Firefox..." with no error messages to go on.
I have a symlink from /usr/bin/firefox that goes to /usr/lib64/firefox/firefox. The RedHat machine has Firefox ESR 17.0.6 installed.
I'm using Xming, and running Firefox by just typing "firefox" in the terminal works fine. I also tried running Selenium through Xvfb, but it hangs at the same place (Xvfb itself verified working by running "firefox &" and taking a screenshot).
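For reference, this is roughly how I start the server under Xvfb (the display number :99 and the screen geometry are arbitrary choices of mine):
Xvfb :99 -screen 0 1280x1024x24 &   # virtual framebuffer, no physical display needed
export DISPLAY=:99                  # point subsequent X clients at it
java -jar selenium-server-standalone.jar -trustAllSSLCertificates -htmlSuite "*firefox" https://BASEURL.com/ suite_FILE.html tmp_results-FILE.html -firefoxProfileTemplate "/home/user/.mozilla/firefox/wwjnyifu.Selenium"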
Below is the terminal input and output (anonymized):
[user@redhat selenium-test]$ java -jar selenium-server-standalone.jar -trustAllSSLCertificates -htmlSuite "*firefox" https://BASEURL.com/ suite_FILE.html tmp_results-FILE.html -firefoxProfileTemplate "/home/user/.mozilla/firefox/wwjnyifu.Selenium"
Jun 25, 2013 2:51:41 PM org.openqa.grid.selenium.GridLauncher main
INFO: Launching a standalone server
14:51:41.817 INFO - Java: Sun Microsystems Inc. 20.12-b01
14:51:41.818 INFO - OS: Linux 2.6.32-279.el6.x86_64 amd64
14:51:41.836 INFO - v2.33.0, with Core v2.33.0. Built from revision 4e90c97
14:51:41.981 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:4444/wd/hub
14:51:41.982 INFO - Version Jetty/5.1.x
14:51:41.983 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
14:51:41.983 INFO - Started HttpContext[/selenium-server,/selenium-server]
14:51:41.984 INFO - Started HttpContext[/,/]
14:51:52.538 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler@c0b76fa
14:51:52.538 INFO - Started HttpContext[/wd,/wd]
14:51:52.546 INFO - Started SocketListener on 0.0.0.0:4444
14:51:52.546 INFO - Started org.openqa.jetty.jetty.Server@b34bed0
jar:file:/home/user/selenium-test/selenium-server-standalone.jar!/customProfileDirCUSTFFCHROME
14:51:52.791 INFO - Preparing Firefox profile...
14:51:53.343 INFO - Launching Firefox...
^C15:03:18.657 INFO - Shutting down...
I gave it almost 10 minutes before pressing CTRL+C.
With debug logging enabled, there's not much more to go on:
08:40:37.183 INFO [10] org.openqa.grid.selenium.GridLauncher - Launching a standalone server
08:40:37.243 INFO [10] org.openqa.selenium.server.SeleniumServer - Writing debug logs to selenium.log
08:40:37.243 INFO [10] org.openqa.selenium.server.SeleniumServer - Java: Sun Microsystems Inc. 20.12-b01
08:40:37.243 INFO [10] org.openqa.selenium.server.SeleniumServer - OS: Linux 2.6.32-279.el6.x86_64 amd64
08:40:37.259 INFO [10] org.openqa.selenium.server.SeleniumServer - v2.33.0, with Core v2.33.0. Built from revision 4e90c97
08:40:37.420 INFO [10] org.openqa.selenium.server.SeleniumServer - RemoteWebDriver instances should connect to: http://127.0.0.1:4444/wd/hub
08:40:37.421 INFO [10] org.openqa.jetty.http.HttpServer - Version Jetty/5.1.x
08:40:37.422 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
08:40:37.423 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/selenium-server,/selenium-server]
08:40:37.423 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/,/]
08:40:37.439 INFO [10] org.openqa.jetty.util.Container - Started org.openqa.jetty.jetty.servlet.ServletHandler@851052d
08:40:37.439 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/wd,/wd]
08:40:37.444 INFO [10] org.openqa.jetty.http.SocketListener - Started SocketListener on 0.0.0.0:4444
08:40:37.445 INFO [10] org.openqa.jetty.util.Container - Started org.openqa.jetty.jetty.Server@252f0999
08:40:37.737 INFO [10] org.openqa.selenium.server.browserlaunchers.FirefoxChromeLauncher - Preparing Firefox profile...
08:40:38.289 INFO [10] org.openqa.selenium.server.browserlaunchers.FirefoxChromeLauncher - Launching Firefox...
08:42:56.271 INFO [10] org.openqa.grid.selenium.GridLauncher - Launching a standalone server
08:42:56.335 INFO [10] org.openqa.selenium.server.SeleniumServer - Writing debug logs to selenium.log
08:42:56.336 INFO [10] org.openqa.selenium.server.SeleniumServer - Java: Sun Microsystems Inc. 20.12-b01
08:42:56.336 INFO [10] org.openqa.selenium.server.SeleniumServer - OS: Linux 2.6.32-279.el6.x86_64 amd64
08:42:56.356 INFO [10] org.openqa.selenium.server.SeleniumServer - v2.33.0, with Core v2.33.0. Built from revision 4e90c97
08:42:56.357 INFO [10] org.openqa.selenium.server.SeleniumServer - Selenium server running in debug mode.
08:42:56.376 DEBUG [10] org.openqa.jetty.util.Container - add component: SocketListener0@0.0.0.0:4444
08:42:56.397 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.http.ResourceCache@39617189
08:42:56.401 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.ProxyHandler in HttpContext[/,/]
08:42:56.401 DEBUG [10] org.openqa.jetty.util.Container - add component: HttpContext[/,/]
08:42:56.402 DEBUG [10] org.openqa.jetty.http.HttpServer - Added HttpContext[/,/] for host *
08:42:56.403 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.http.ResourceCache@2d20cc56
08:42:56.404 DEBUG [10] org.openqa.jetty.http.HttpContext - added SC{BASIC,null,user,CONFIDENTIAL} at /org/openqa/selenium/tests/html/basicAuth/*
08:42:56.412 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.http.handler.SecurityHandler in HttpContext[/selenium-server,/selenium-server]
08:42:56.415 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.StaticContentHandler in HttpContext[/selenium-server,/selenium-server]
08:42:56.416 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.SessionExtensionJsHandler in HttpContext[/selenium-server,/selenium-server]
08:42:56.416 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.htmlrunner.SingleTestSuiteResourceHandler in HttpContext[/selenium-server,/selenium-server]
08:42:56.417 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.htmlrunner.SeleniumHTMLRunnerResultsHandler@56406199
08:42:56.417 DEBUG [10] org.openqa.jetty.util.Container - add component: HttpContext[/selenium-server,/selenium-server]
08:42:56.418 DEBUG [10] org.openqa.jetty.http.HttpServer - Added HttpContext[/selenium-server,/selenium-server] for host *
08:42:56.471 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.http.ResourceCache@1d10c424
08:42:56.487 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.SeleniumDriverResourceHandler in HttpContext[/selenium-server,/selenium-server]
08:42:56.488 DEBUG [10] org.openqa.jetty.util.Container - add component: HttpContext[/selenium-server/driver,/selenium-server/driver]
08:42:56.488 DEBUG [10] org.openqa.jetty.http.HttpServer - Added HttpContext[/selenium-server/driver,/selenium-server/driver] for host *
08:42:56.488 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.http.ResourceCache@5b40c281
08:42:56.501 DEBUG [10] org.openqa.jetty.util.Container - add component: WebDriver remote server
08:42:56.506 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.jetty.servlet.HashSessionManager@7df17e77
08:42:56.506 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.jetty.servlet.ServletHandler@79a5f739
08:42:56.507 INFO [10] org.openqa.selenium.server.SeleniumServer - RemoteWebDriver instances should connect to: http://127.0.0.1:4444/wd/hub
08:42:56.507 DEBUG [10] org.openqa.jetty.util.Container - add component: HttpContext[/wd,/wd]
08:42:56.508 DEBUG [10] org.openqa.jetty.http.HttpServer - Added HttpContext[/wd,/wd] for host *
08:42:56.508 DEBUG [10] org.openqa.jetty.util.Container - Starting org.openqa.jetty.jetty.Server@252f0999
08:42:56.509 INFO [10] org.openqa.jetty.http.HttpServer - Version Jetty/5.1.x
08:42:56.509 DEBUG [10] org.openqa.jetty.http.HttpServer - LISTENERS: [SocketListener0@0.0.0.0:4444]
08:42:56.509 DEBUG [10] org.openqa.jetty.http.HttpServer - HANDLER: {null={/selenium-server/driver/*=[HttpContext[/selenium-server/driver,/selenium-server/driver]], /selenium-server/*=[HttpContext[/selenium-server,/selenium-server]], /=[HttpContext[/,/]], /wd/*=[HttpContext[/wd,/wd]]}}
08:42:56.510 DEBUG [10] org.openqa.jetty.util.Container - Starting HttpContext[/selenium-server/driver,/selenium-server/driver]
08:42:56.510 DEBUG [10] org.openqa.jetty.http.HttpContext - Init classloader from null, sun.misc.Launcher$AppClassLoader@4aad3ba4 for HttpContext[/selenium-server/driver,/selenium-server/driver]
08:42:56.510 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
08:42:56.510 DEBUG [10] org.openqa.jetty.util.Container - Starting HttpContext[/selenium-server,/selenium-server]
08:42:56.510 DEBUG [10] org.openqa.jetty.http.HttpContext - Init classloader from null, sun.misc.Launcher$AppClassLoader@4aad3ba4 for HttpContext[/selenium-server,/selenium-server]
08:42:56.511 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.jetty.http.handler.SecurityHandler in HttpContext[/selenium-server,/selenium-server]
08:42:56.511 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.selenium.server.StaticContentHandler in HttpContext[/selenium-server,/selenium-server]
08:42:56.511 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.selenium.server.SessionExtensionJsHandler in HttpContext[/selenium-server,/selenium-server]
08:42:56.511 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.selenium.server.htmlrunner.SingleTestSuiteResourceHandler in HttpContext[/selenium-server,/selenium-server]
08:42:56.512 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.selenium.server.SeleniumDriverResourceHandler in HttpContext[/selenium-server,/selenium-server]
08:42:56.512 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/selenium-server,/selenium-server]
08:42:56.520 DEBUG [10] org.openqa.jetty.util.Container - Starting HttpContext[/,/]
08:42:56.520 DEBUG [10] org.openqa.jetty.http.HttpContext - Init classloader from null, sun.misc.Launcher$AppClassLoader@4aad3ba4 for HttpContext[/,/]
08:42:56.520 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.selenium.server.ProxyHandler in HttpContext[/,/]
08:42:56.521 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/,/]
08:42:56.521 DEBUG [10] org.openqa.jetty.util.Container - Starting HttpContext[/wd,/wd]
08:42:56.521 DEBUG [10] org.openqa.jetty.http.HttpContext - Init classloader from null, sun.misc.Launcher$AppClassLoader@4aad3ba4 for HttpContext[/wd,/wd]
08:42:56.521 DEBUG [10] org.openqa.jetty.util.Container - Starting org.openqa.jetty.jetty.servlet.ServletHandler@79a5f739
08:42:56.521 DEBUG [10] org.openqa.jetty.jetty.servlet.AbstractSessionManager - New random session seed
08:43:07.962 DEBUG [10] org.openqa.jetty.jetty.servlet.Holder - Started holder of class org.openqa.selenium.remote.server.DriverServlet
08:43:07.962 DEBUG [11] org.openqa.jetty.jetty.servlet.AbstractSessionManager - Session scavenger period = 30s
08:43:07.962 INFO [10] org.openqa.jetty.util.Container - Started org.openqa.jetty.jetty.servlet.ServletHandler@79a5f739
08:43:07.962 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/wd,/wd]
08:43:07.970 INFO [10] org.openqa.jetty.http.SocketListener - Started SocketListener on 0.0.0.0:4444
08:43:07.970 INFO [10] org.openqa.jetty.util.Container - Started org.openqa.jetty.jetty.Server@252f0999
08:43:07.983 DEBUG [10] org.openqa.selenium.server.browserlaunchers.BrowserLauncherFactory - Requested browser string '*firefox' matches *firefox
08:43:07.984 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.CombinedFirefoxLocator - Discovering Firefox 2...
08:43:07.990 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Discovering Firefox 2...
08:43:07.990 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Checking whether Firefox 2 launcher at :'/Applications/Minefield.app/Contents/MacOS/firefox-bin' is valid...
08:43:07.990 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Checking whether Firefox 2 launcher at :'/Applications/Firefox-2.app/Contents/MacOS/firefox-bin' is valid...
08:43:07.990 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Checking whether Firefox 2 launcher at :'/Applications/Firefox.app/Contents/MacOS/firefox-bin' is valid...
08:43:07.990 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Checking whether Firefox 2 launcher at :'/usr/lib/firefox/firefox-bin' is valid...
08:43:08.008 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Checking whether Firefox 2 launcher at :'/usr/bin/firefox-bin' is valid...
08:43:08.010 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Discovered valid Firefox 2 launcher : '/usr/bin/firefox-bin'
08:43:08.351 DEBUG [10] org.openqa.selenium.server.browserlaunchers.ResourceExtractor - Extracting /customProfileDirCUSTFFCHROME to /tmp/customProfileDir987977
08:43:08.432 INFO [10] org.openqa.selenium.server.browserlaunchers.FirefoxChromeLauncher - Preparing Firefox profile...
08:43:08.984 INFO [10] org.openqa.selenium.server.browserlaunchers.FirefoxChromeLauncher - Launching Firefox...
08:43:09.988 INFO [12] org.openqa.selenium.server.SeleniumServer - Shutting down...
Any ideas on where to start looking, or any fixes?

Finally found the problem.
Firefox started successfully when I expanded the "*firefox" argument to include the absolute path to Firefox (the symlink evidently didn't work).

In Selenium 2.0 the full path is not allowed. The help output says:
Supported browsers include:
*firefox
*mock
...
*custom
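For anyone hitting the same wall, this is a sketch of the invocation that worked for me, with the browser string expanded to the absolute path (substitute your own Firefox location):
java -jar selenium-server-standalone.jar -trustAllSSLCertificates -htmlSuite "*firefox /usr/lib64/firefox/firefox" https://BASEURL.com/ suite_FILE.html tmp_results-FILE.html -firefoxProfileTemplate "/home/user/.mozilla/firefox/wwjnyifu.Selenium"
If your Selenium build rejects a path after *firefox, the *custom launcher from the list above should accept an arbitrary executable path instead, as far as I know.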

Related

Sonar server is not coming up

I have installed Sonar on a Windows machine, and when I try to run it I get the error below and Sonar does not come up.
Could someone help with this?
Sonar startup log (below):
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
D:\Agile\NewSonarQube\sonarqube-5.6.6\bin\windows-x86-64>StartSonar.bat
wrapper | --> Wrapper Started as Console
wrapper | Launching a JVM...
jvm 1 | Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
jvm 1 | Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
jvm 1 |
jvm 1 | 2019.06.24 16:30:05 INFO app[o.s.a.AppFileSystem] Cleaning or creating temp directory D:\Agile\NewSonarQube\sonarqube-5.6.6\temp
jvm 1 | 2019.06.24 16:30:26 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[es]: D:\Agile\Java\jdk1.8.0_65\jre\bin\java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djava.net.preferIPv4Stack=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=D:\Agile\NewSonarQube\sonarqube-5.6.6\temp -javaagent:D:\Agile\Java\jdk1.8.0_65\jre\lib\management-agent.jar -cp ./lib/common/*;./lib/search/* org.sonar.search.SearchServer D:\Agile\NewSonarQube\sonarqube-5.6.6\temp\sq-process5784289335367032457properties
jvm 1 | 2019.06.24 16:32:29 INFO app[o.s.p.m.Monitor] Process[es] is up
jvm 1 | 2019.06.24 16:32:29 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[web]: D:\Agile\Java\jdk1.8.0_65\jre\bin\java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.management.enabled=false -Djruby.compile.invokedynamic=false -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=D:\Agile\NewSonarQube\sonarqube-5.6.6\temp -javaagent:D:\Agile\Java\jdk1.8.0_65\jre\lib\management-agent.jar -cp ./lib/common/*;./lib/server/*;D:\Agile\NewSonarQube\sonarqube-5.6.6\lib\jdbc\postgresql\postgresql-9.3-1102-jdbc41.jar org.sonar.server.app.WebServer D:\Agile\NewSonarQube\sonarqube-5.6.6\temp\sq-process7712868170609596655properties
jvm 1 | 2019.06.24 16:34:52 INFO app[o.s.p.m.Monitor] Process[web] is up
jvm 1 | 2019.06.24 16:34:52 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[ce]: D:\Agile\Java\jdk1.8.0_65\jre\bin\java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=D:\Agile\NewSonarQube\sonarqube-5.6.6\temp -javaagent:D:\Agile\Java\jdk1.8.0_65\jre\lib\management-agent.jar -cp ./lib/common/*;./lib/server/*;./lib/ce/*;D:\Agile\NewSonarQube\sonarqube-5.6.6\lib\jdbc\postgresql\postgresql-9.3-1102-jdbc41.jar org.sonar.ce.app.CeServer D:\Agile\NewSonarQube\sonarqube-5.6.6\temp\sq-process6453747015728432890properties
jvm 1 | 2019.06.24 16:35:21 INFO app[o.s.p.m.Monitor] Process[ce] is up
jvm 1 | 2019.06.24 16:35:21 INFO app[o.s.p.m.Monitor] Process[web] is stopping
jvm 1 | 2019.06.24 16:35:30 INFO app[o.s.p.m.Monitor] Process[web] is stopped
jvm 1 | 2019.06.24 16:35:30 INFO app[o.s.p.m.Monitor] Process[es] is stopping
jvm 1 | 2019.06.24 16:35:33 INFO app[o.s.p.m.Monitor] Process[es] is stopped
wrapper | <-- Wrapper Stopped
Also attaching sonar.log below:
--> Wrapper Started as Console
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2019.07.03 11:24:28 INFO app[o.s.a.AppFileSystem] Cleaning or creating temp directory D:\Agile\NewSonarQube\sonarqube-5.6.6\temp
2019.07.03 11:24:30 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[es]: D:\Agile\Java\jdk1.8.0_65\jre\bin\java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djava.net.preferIPv4Stack=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=D:\Agile\NewSonarQube\sonarqube-5.6.6\temp -javaagent:D:\Agile\Java\jdk1.8.0_65\jre\lib\management-agent.jar -cp ./lib/common/*;./lib/search/* org.sonar.search.SearchServer D:\Agile\NewSonarQube\sonarqube-5.6.6\temp\sq-process7809422627146396746properties
2019.07.03 11:24:30 INFO es[o.s.p.ProcessEntryPoint] Starting es
2019.07.03 11:24:30 INFO es[o.s.s.EsSettings] Elasticsearch listening on 127.0.0.1:9001
2019.07.03 11:24:31 INFO es[o.elasticsearch.node] [sonar-1562133268619] version[1.7.5], pid[8484], build[00f95f4/2016-02-02T09:55:30Z]
2019.07.03 11:24:31 INFO es[o.elasticsearch.node] [sonar-1562133268619] initializing ...
2019.07.03 11:24:31 INFO es[o.e.plugins] [sonar-1562133268619] loaded [], sites []
2019.07.03 11:24:31 INFO es[o.elasticsearch.env] [sonar-1562133268619] using [1] data paths, mounts [[(D:)]], net usable_space [79.6gb], net total_space [131.9gb], types [NTFS]
2019.07.03 11:24:32 WARN es[o.e.bootstrap] JNA not found. native methods will be disabled.
2019.07.03 11:24:33 INFO es[o.elasticsearch.node] [sonar-1562133268619] initialized
2019.07.03 11:24:33 INFO es[o.elasticsearch.node] [sonar-1562133268619] starting ...
2019.07.03 11:24:33 INFO es[o.e.transport] [sonar-1562133268619] bound_address {inet[/127.0.0.1:9001]}, publish_address {inet[/127.0.0.1:9001]}
2019.07.03 11:24:33 INFO es[o.e.discovery] [sonar-1562133268619] sonarqube/XGZJsELLSXuNlBZ4G449tw
2019.07.03 11:24:36 INFO es[o.e.cluster.service] [sonar-1562133268619] new_master [sonar-1562133268619][XGZJsELLSXuNlBZ4G449tw][CLBBLR-5070][inet[/127.0.0.1:9001]]{rack_id=sonar-1562133268619}, reason: zen-disco-join (elected_as_master)
2019.07.03 11:24:36 INFO es[o.elasticsearch.node] [sonar-1562133268619] started
2019.07.03 11:24:36 INFO es[o.e.gateway] [sonar-1562133268619] recovered [6] indices into cluster_state
2019.07.03 11:24:39 INFO app[o.s.p.m.Monitor] Process[es] is up
2019.07.03 11:24:39 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[web]: D:\Agile\Java\jdk1.8.0_65\jre\bin\java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.management.enabled=false -Djruby.compile.invokedynamic=false -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=D:\Agile\NewSonarQube\sonarqube-5.6.6\temp -javaagent:D:\Agile\Java\jdk1.8.0_65\jre\lib\management-agent.jar -cp ./lib/common/*;./lib/server/*;D:\Agile\NewSonarQube\sonarqube-5.6.6\lib\jdbc\postgresql\postgresql-9.3-1102-jdbc41.jar org.sonar.server.app.WebServer D:\Agile\NewSonarQube\sonarqube-5.6.6\temp\sq-process6230173931576405711properties
2019.07.03 11:24:40 INFO web[o.s.p.ProcessEntryPoint] Starting web
2019.07.03 11:24:41 INFO web[o.s.s.a.TomcatContexts] Webapp directory: D:\Agile\NewSonarQube\sonarqube-5.6.6\web
2019.07.03 11:24:41 INFO web[o.a.c.h.Http11NioProtocol] Initializing ProtocolHandler ["http-nio-0.0.0.0-9000"]
2019.07.03 11:24:41 INFO web[o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2019.07.03 11:24:42 INFO web[o.s.s.p.ServerImpl] SonarQube Server / 5.6.6 / e8e13145497bb920921ae4fe11b09f0903f1d298
2019.07.03 11:24:42 INFO web[o.sonar.db.Database] Create JDBC data source for jdbc:postgresql://localhost/sonar
2019.07.03 11:24:43 INFO web[o.s.s.p.DefaultServerFileSystem] SonarQube home: D:\Agile\NewSonarQube\sonarqube-5.6.6
2019.07.03 11:24:43 INFO web[o.e.plugins] [sonar-1562133268619] loaded [], sites []
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin Android / 1.1 / 9ab2bbcc83177e67c74d365f009bfe05bf38c7e3
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin C# / 6.3.0.2862 / 1600df4ea68defd03b270da8e2935cfbf342cd8e
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin CSS / SCSS / Less / 3.1 / 58a0a86a53f82a8486a5ee93681257e1ae0a10c8
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin Checkstyle / 3.7 /
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin Cobertura / 1.7 / d14b7978322d19a2795286ebfed225a57b13c3af
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin CodeCracker for C# / 1.0.1
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin ESLint / 0.1.1
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin Git / 1.2 / a713dd64daf8719ba4e7f551f9a1966c62690c17
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin PL/SQL (Community) / 2.0.0-SNAPSHOT /
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin PMD / 2.6 / f419f834b4bea51f9b6da33517b7f6186db5c066
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin SVN / 1.3 / aff503d48bc77b07c2b62abf93249d0a20bd355c
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin SonarJS / 3.1.1.5128 / 564d130c281685fde03e85964417158e32a17b2b
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin SonarJava / 4.12.0.11033 / fa9fe66b0cb1b6ab8d7f341cc1f7908897a9cf33
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin SonarPHP / 2.11.0.2485 / 741861a29e5f9a26c6c99c06268facb6c4f4a882
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin SonarXML / 1.4.3.1027 / 39588245cecf538bb27be4e496ff303b0143d20b
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin Sonargraph / 3.5
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin Timeline / 1.5 / a9cae1328fd455a128b5d7d603381f47398c6e2a
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin TypeScript / 1.2.0-rc1
2019.07.03 11:24:44 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin Web / 2.5.0.476 / 636872f5d37fa7a440fe07d08d504e1a881225e5
2019.07.03 11:24:47 INFO web[o.s.s.p.RailsAppsDeployer] Deploying Ruby on Rails applications
2019.07.03 11:24:49 INFO web[o.s.s.p.UpdateCenterClient] Update center: https://update.sonarsource.org/update-center.properties (no proxy)
2019.07.03 11:24:49 INFO web[o.s.s.n.NotificationService] Notification service started (delay 60 sec.)
2019.07.03 11:24:49 INFO web[o.s.s.s.RegisterMetrics] Register metrics
2019.07.03 11:24:50 INFO web[o.s.s.r.RegisterRules] Register rules
2019.07.03 11:24:53 INFO web[o.s.s.q.RegisterQualityProfiles] Register quality profiles
2019.07.03 11:24:56 INFO web[o.s.s.s.RegisterNewMeasureFilters] Register measure filters
2019.07.03 11:24:56 INFO web[o.s.s.s.RegisterDashboards] Register dashboards
2019.07.03 11:24:56 INFO web[o.s.s.s.RegisterPermissionTemplates] Register permission templates
2019.07.03 11:24:56 INFO web[o.s.s.s.RenameDeprecatedPropertyKeys] Rename deprecated property keys
2019.07.03 11:24:56 INFO web[o.s.s.e.IndexerStartupTask] Index activities
2019.07.03 11:24:56 INFO web[o.s.s.e.IndexerStartupTask] Index issues
2019.07.03 11:24:57 INFO web[o.s.s.e.IndexerStartupTask] Index tests
2019.07.03 11:24:57 INFO web[o.s.s.e.IndexerStartupTask] Index users
2019.07.03 11:24:57 INFO web[o.s.s.e.IndexerStartupTask] Index views
2019.07.03 11:24:57 INFO web[jruby.rack] jruby 1.7.9 (ruby-1.8.7p370) 2013-12-06 87b108a on Java HotSpot(TM) 64-Bit Server VM 1.8.0_65-b17 [Windows 7-amd64]
2019.07.03 11:24:57 INFO web[jruby.rack] using a shared (threadsafe!) runtime
2019.07.03 11:25:09 INFO web[jruby.rack] keeping custom (config.logger) Rails logger instance
2019.07.03 11:25:09 INFO web[o.s.s.p.MasterServletFilter] Initializing servlet filter org.sonar.server.authentication.InitFilter@649f4774 [pattern=/sessions/init/*]
2019.07.03 11:25:09 INFO web[o.s.s.p.MasterServletFilter] Initializing servlet filter org.sonar.server.authentication.OAuth2CallbackFilter@18a75007 [pattern=/oauth2/callback/*]
2019.07.03 11:25:09 INFO web[o.a.c.h.Http11NioProtocol] Starting ProtocolHandler ["http-nio-0.0.0.0-9000"]
2019.07.03 11:25:09 INFO web[o.s.s.a.TomcatAccessLog] Web server is started
2019.07.03 11:25:09 INFO web[o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2019.07.03 11:25:09 INFO app[o.s.p.m.Monitor] Process[web] is up
2019.07.03 11:25:09 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[ce]: D:\Agile\Java\jdk1.8.0_65\jre\bin\java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=D:\Agile\NewSonarQube\sonarqube-5.6.6\temp -javaagent:D:\Agile\Java\jdk1.8.0_65\jre\lib\management-agent.jar -cp ./lib/common/*;./lib/server/*;./lib/ce/*;D:\Agile\NewSonarQube\sonarqube-5.6.6\lib\jdbc\postgresql\postgresql-9.3-1102-jdbc41.jar org.sonar.ce.app.CeServer D:\Agile\NewSonarQube\sonarqube-5.6.6\temp\sq-process9172021716354276652properties
2019.07.03 11:25:10 INFO ce[o.s.p.ProcessEntryPoint] Starting ce
2019.07.03 11:25:10 INFO ce[o.s.ce.app.CeServer] Compute Engine starting up...
2019.07.03 11:25:10 INFO ce[o.s.s.p.ServerImpl] SonarQube Server / 5.6.6 / e8e13145497bb920921ae4fe11b09f0903f1d298
2019.07.03 11:25:10 INFO ce[o.sonar.db.Database] Create JDBC data source for jdbc:postgresql://localhost/sonar
2019.07.03 11:25:11 INFO ce[o.s.s.p.DefaultServerFileSystem] SonarQube home: D:\Agile\NewSonarQube\sonarqube-5.6.6
2019.07.03 11:25:11 INFO ce[o.e.plugins] [sonar-1562133268619] loaded [], sites []
2019.07.03 11:25:12 INFO ce[o.s.c.c.CePluginRepository] Load plugins
2019.07.03 11:25:17 ERROR ce[o.s.ce.app.CeServer] Compute Engine startup failed
org.apache.ibatis.exceptions.PersistenceException:
### Error updating database. Cause: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "ce_activity_uuid"
Detail: Key (uuid)=(AWpKhMdKhBAmaVnZ4vza) already exists.
### The error may involve org.sonar.db.ce.CeActivityMapper.insert-Inline
### The error occurred while setting parameters
### SQL: insert into ce_activity (uuid, component_uuid, snapshot_id, status, task_type, is_last, is_last_key, submitter_login, submitted_at, started_at, executed_at, created_at, updated_at, execution_time_ms) values ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )
### Cause: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "ce_activity_uuid"
Detail: Key (uuid)=(AWpKhMdKhBAmaVnZ4vza) already exists.
at org.apache.ibatis.exceptions.ExceptionFactory.wrapException(ExceptionFactory.java:26) ~[mybatis-3.2.7.jar:3.2.7]
at org.apache.ibatis.session.defaults.DefaultSqlSession.update(DefaultSqlSession.java:154) ~[mybatis-3.2.7.jar:3.2.7]
at org.apache.ibatis.session.defaults.DefaultSqlSession.insert(DefaultSqlSession.java:141) ~[mybatis-3.2.7.jar:3.2.7]
at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:51) ~[mybatis-3.2.7.jar:3.2.7]
at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:52) ~[mybatis-3.2.7.jar:3.2.7]
at com.sun.proxy.$Proxy39.insert(Unknown Source) ~[na:na]
at org.sonar.db.ce.CeActivityDao.insert(CeActivityDao.java:51) ~[sonar-db-5.6.6.jar:na]
at org.sonar.ce.queue.CeQueueImpl.remove(CeQueueImpl.java:181) ~[sonar-server-5.6.6.jar:na]
at org.sonar.ce.queue.CeQueueImpl.cancelImpl(CeQueueImpl.java:156) ~[sonar-server-5.6.6.jar:na]
at org.sonar.server.computation.queue.InternalCeQueueImpl.cancel(InternalCeQueueImpl.java:128) ~[sonar-server-5.6.6.jar:na]
at org.sonar.server.computation.queue.CeQueueCleaner.verifyConsistency(CeQueueCleaner.java:81) ~[sonar-server-5.6.6.jar:na]
at org.sonar.server.computation.queue.CeQueueCleaner.clean(CeQueueCleaner.java:59) ~[sonar-server-5.6.6.jar:na]
at org.sonar.server.computation.queue.CeQueueInitializer.initCe(CeQueueInitializer.java:59) ~[sonar-server-5.6.6.jar:na]
at org.sonar.server.computation.queue.CeQueueInitializer.onServerStart(CeQueueInitializer.java:51) ~[sonar-server-5.6.6.jar:na]
at org.sonar.server.platform.ServerLifecycleNotifier.notifyStart(ServerLifecycleNotifier.java:67) ~[sonar-server-5.6.6.jar:na]
at org.sonar.ce.container.ComputeEngineContainerImpl.startupTasks(ComputeEngineContainerImpl.java:631) ~[sonar-ce-5.6.6.jar:na]
at org.sonar.ce.container.ComputeEngineContainerImpl.start(ComputeEngineContainerImpl.java:619) ~[sonar-ce-5.6.6.jar:na]
at org.sonar.ce.ComputeEngineImpl.startup(ComputeEngineImpl.java:43) ~[sonar-ce-5.6.6.jar:na]
at org.sonar.ce.app.CeServer$CeMainThread.startup(CeServer.java:175) [sonar-ce-5.6.6.jar:na]
at org.sonar.ce.app.CeServer$CeMainThread.attemptStartup(CeServer.java:165) [sonar-ce-5.6.6.jar:na]
at org.sonar.ce.app.CeServer$CeMainThread.run(CeServer.java:153) [sonar-ce-5.6.6.jar:na]
Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "ce_activity_uuid"
Detail: Key (uuid)=(AWpKhMdKhBAmaVnZ4vza) already exists.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2198) ~[postgresql-9.3-1102-jdbc41.jar:na]
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1927) ~[postgresql-9.3-1102-jdbc41.jar:na]
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255) ~[postgresql-9.3-1102-jdbc41.jar:na]
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:561) ~[postgresql-9.3-1102-jdbc41.jar:na]
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:419) ~[postgresql-9.3-1102-jdbc41.jar:na]
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:412) ~[postgresql-9.3-1102-jdbc41.jar:na]
at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.ibatis.executor.statement.PreparedStatementHandler.update(PreparedStatementHandler.java:44) ~[mybatis-3.2.7.jar:3.2.7]
at org.apache.ibatis.executor.statement.RoutingStatementHandler.update(RoutingStatementHandler.java:69) ~[mybatis-3.2.7.jar:3.2.7]
at org.apache.ibatis.executor.ReuseExecutor.doUpdate(ReuseExecutor.java:50) ~[mybatis-3.2.7.jar:3.2.7]
at org.apache.ibatis.executor.BaseExecutor.update(BaseExecutor.java:105) ~[mybatis-3.2.7.jar:3.2.7]
at org.apache.ibatis.executor.CachingExecutor.update(CachingExecutor.java:71) ~[mybatis-3.2.7.jar:3.2.7]
at org.apache.ibatis.session.defaults.DefaultSqlSession.update(DefaultSqlSession.java:152) ~[mybatis-3.2.7.jar:3.2.7]
... 19 common frames omitted
2019.07.03 11:25:17 INFO app[o.s.p.m.Monitor] Process[ce] is up
2019.07.03 11:25:17 INFO app[o.s.p.m.Monitor] Process[web] is stopping
2019.07.03 11:25:17 INFO web[o.s.p.StopWatcher] Stopping process
2019.07.03 11:25:17 INFO web[o.a.c.h.Http11NioProtocol] Pausing ProtocolHandler ["http-nio-0.0.0.0-9000"]
2019.07.03 11:25:18 INFO web[o.s.s.n.NotificationService] Notification service stopped
2019.07.03 11:25:18 INFO web[o.a.c.h.Http11NioProtocol] Stopping ProtocolHandler ["http-nio-0.0.0.0-9000"]
2019.07.03 11:25:19 INFO web[o.a.c.h.Http11NioProtocol] Destroying ProtocolHandler ["http-nio-0.0.0.0-9000"]
2019.07.03 11:25:19 INFO web[o.s.s.a.TomcatAccessLog] Web server is stopped
2019.07.03 11:25:20 INFO app[o.s.p.m.Monitor] Process[web] is stopped
2019.07.03 11:25:20 INFO app[o.s.p.m.Monitor] Process[es] is stopping
2019.07.03 11:25:20 INFO es[o.s.p.StopWatcher] Stopping process
2019.07.03 11:25:20 INFO es[o.elasticsearch.node] [sonar-1562133268619] stopping ...
2019.07.03 11:25:20 INFO es[o.elasticsearch.node] [sonar-1562133268619] stopped
2019.07.03 11:25:20 INFO es[o.elasticsearch.node] [sonar-1562133268619] closing ...
2019.07.03 11:25:20 INFO es[o.elasticsearch.node] [sonar-1562133268619] closed
2019.07.03 11:25:21 INFO app[o.s.p.m.Monitor] Process[es] is stopped
<-- Wrapper Stopped
The ERROR from the above log is:
ERROR: duplicate key value violates unique constraint "ce_activity_uuid"
Any leads on this error?
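One lead I'm already considering (assuming the stale row can be safely removed; please correct me if not) is deleting the offending ce_activity row directly in the PostgreSQL database and then restarting SonarQube, roughly:
psql -U sonar -d sonar -c "DELETE FROM ce_activity WHERE uuid = 'AWpKhMdKhBAmaVnZ4vza';"
(Here sonar is my database user and database name, and the uuid is the one reported in the error above.)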

Unable to start SonarQube

I installed SonarQube on Windows Server 2016. There was no problem during the installation, and I even got access to the web interface; everything was working just fine.
But then I had to restart my server, and now SonarQube refuses to start.
Here is the sonar.log:
--> Wrapper Started as Console
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.05.25 15:51:26 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory C:\sonarqube\sonarqube-6.7.3\temp
2018.05.25 15:51:26 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2018.05.25 15:51:26 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [C:\sonarqube\sonarqube-6.7.3\elasticsearch]: C:\Program Files\Java\jdk1.8.0_171\jre\bin\java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -Djdk.io.permissionsUseCanonicalPath=true -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Dlog4j.skipJansi=true -Xms2048m -Xmx2048m -XX:+HeapDumpOnOutOfMemoryError -Delasticsearch -Des.path.home=C:\sonarqube\sonarqube-6.7.3\elasticsearch -cp lib/* org.elasticsearch.bootstrap.Elasticsearch -Epath.conf=C:\sonarqube\sonarqube-6.7.3\temp\conf\es
2018.05.25 15:51:26 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2018.05.25 15:51:27 INFO app[][o.e.p.PluginsService] no modules loaded
2018.05.25 15:51:27 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2018.05.25 15:51:31 DEBUG app[][i.netty.util.NetUtil] Failed to get SOMAXCONN from sysctl and file \proc\sys\net\core\somaxconn. Default: 200
2018.05.25 15:51:34 DEBUG app[][o.e.c.t.TransportClientNodesService] failed to connect to node [{#transport#-1}{rUfN9MfRR1yx1epNtNzu6A}{127.0.0.1}{127.0.0.1:9001}], ignoring...
2018.05.25 15:51:34 DEBUG app[][o.s.a.p.EsProcessMonitor] Connected to Elasticsearch node: [127.0.0.1:9001]
2018.05.25 15:51:36 DEBUG app[][o.e.c.t.TransportClientNodesService] failed to connect to node [{#transport#-1}{rUfN9MfRR1yx1epNtNzu6A}{127.0.0.1}{127.0.0.1:9001}], ignoring...
2018.05.25 15:51:41 DEBUG app[][o.e.t.n.Netty4Transport] connected to node [{sonarqube}{CFUKuUDmTQ2UsgzJtMo9sA}{-uBZwbFUTHyRUFcE7jQ78w}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}]
2018.05.25 15:51:48 INFO app[][o.s.a.SchedulerImpl] Process[es] is up
2018.05.25 15:51:48 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='web', ipcIndex=2, logFilenamePrefix=web]] from [C:\sonarqube\sonarqube-6.7.3]: C:\Program Files\Java\jdk1.8.0_171\jre\bin\java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=C:\sonarqube\sonarqube-6.7.3\temp -Xmx2048m -Xms2048m -XX:+HeapDumpOnOutOfMemoryError -cp ./lib/common/*;./lib/server/*;C:\sonarqube\sonarqube-6.7.3\lib\jdbc\mssql\mssql-jdbc-6.2.2.jre8.jar org.sonar.server.app.WebServer C:\sonarqube\sonarqube-6.7.3\temp\sq-process5165540938815359073properties
2018.05.25 15:52:10 DEBUG app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [web]: 0
2018.05.25 15:52:10 INFO app[][o.s.a.SchedulerImpl] Process [web] is stopped
2018.05.25 15:52:10 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2018.05.25 15:52:10 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
<-- Wrapper Stopped
And the web.log:
2018.05.25 15:51:48 INFO web[][o.s.p.ProcessEntryPoint] Starting web
2018.05.25 15:51:49 INFO web[][o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2018.05.25 15:51:51 INFO web[][o.e.p.PluginsService] no modules loaded
2018.05.25 15:51:51 INFO web[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.index.reindex.ReindexPlugin]
2018.05.25 15:51:51 INFO web[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.join.ParentJoinPlugin]
2018.05.25 15:51:51 INFO web[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.percolator.PercolatorPlugin]
2018.05.25 15:51:51 INFO web[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2018.05.25 15:51:52 DEBUG web[][i.n.c.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 2
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent] Platform: Windows
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent0] -Dio.netty.noUnsafe: false
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent0] Java version: 8
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe.theUnsafe: available
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe.copyMemory: available
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent0] java.nio.Buffer.address: available
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent0] direct buffer constructor: available
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent0] java.nio.Bits.unaligned: available, true
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent0] jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable prior to Java9
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent0] java.nio.DirectByteBuffer.<init>(long, int): available
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent] sun.misc.Unsafe: available
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent] -Dio.netty.tmpdir: C:\sonarqube\sonarqube-6.7.3\temp (java.io.tmpdir)
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent] -Dio.netty.noPreferDirect: false
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent] -Dio.netty.maxDirectMemory: 2075918336 bytes
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent] -Dio.netty.uninitializedArrayAllocationThreshold: -1
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.CleanerJava6] java.nio.ByteBuffer.cleaner(): available
2018.05.25 15:51:52 DEBUG web[][i.n.c.n.NioEventLoop] -Dio.netty.noKeySetOptimization: false
2018.05.25 15:51:52 DEBUG web[][i.n.c.n.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
2018.05.25 15:51:52 DEBUG web[][i.n.u.i.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: available
2018.05.25 15:51:52 DEBUG web[][i.n.c.DefaultChannelId] -Dio.netty.processId: 5492 (auto-detected)
2018.05.25 15:51:52 DEBUG web[][i.netty.util.NetUtil] -Djava.net.preferIPv4Stack: false
2018.05.25 15:51:52 DEBUG web[][i.netty.util.NetUtil] -Djava.net.preferIPv6Addresses: false
2018.05.25 15:51:52 DEBUG web[][i.netty.util.NetUtil] Loopback interface: lo (Software Loopback Interface 1, 127.0.0.1)
2018.05.25 15:51:52 DEBUG web[][i.netty.util.NetUtil] Failed to get SOMAXCONN from sysctl and file \proc\sys\net\core\somaxconn. Default: 200
2018.05.25 15:51:52 DEBUG web[][i.n.c.DefaultChannelId] -Dio.netty.machineId: 00:15:5d:ff:fe:01:ab:03 (auto-detected)
2018.05.25 15:51:52 DEBUG web[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
2018.05.25 15:51:52 DEBUG web[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.maxRecords: 4
2018.05.25 15:51:52 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 2
2018.05.25 15:51:52 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 2
2018.05.25 15:51:52 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
2018.05.25 15:51:52 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11
2018.05.25 15:51:52 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216
2018.05.25 15:51:52 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.tinyCacheSize: 512
2018.05.25 15:51:52 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
2018.05.25 15:51:52 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
2018.05.25 15:51:52 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2018.05.25 15:51:52 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
2018.05.25 15:51:52 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.useCacheForAllThreads: true
2018.05.25 15:51:52 DEBUG web[][i.n.b.ByteBufUtil] -Dio.netty.allocator.type: pooled
2018.05.25 15:51:52 DEBUG web[][i.n.b.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 65536
2018.05.25 15:51:52 DEBUG web[][i.n.b.ByteBufUtil] -Dio.netty.maxThreadLocalCharBufferSize: 16384
2018.05.25 15:51:52 DEBUG web[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.bytebuf.checkAccessible: true
2018.05.25 15:51:52 DEBUG web[][i.n.u.ResourceLeakDetectorFactory] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@497e6350
2018.05.25 15:51:52 DEBUG web[][i.n.util.Recycler] -Dio.netty.recycler.maxCapacityPerThread: 32768
2018.05.25 15:51:52 DEBUG web[][i.n.util.Recycler] -Dio.netty.recycler.maxSharedCapacityFactor: 2
2018.05.25 15:51:52 DEBUG web[][i.n.util.Recycler] -Dio.netty.recycler.linkCapacity: 16
2018.05.25 15:51:52 DEBUG web[][i.n.util.Recycler] -Dio.netty.recycler.ratio: 8
2018.05.25 15:51:53 INFO web[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2018.05.25 15:51:53 INFO web[][o.s.s.p.LogServerVersion] SonarQube Server / 6.7.3.38370 / baa823c2ae6ed4dd406b2ef12bc5ac6201e466f9
2018.05.25 15:51:53 INFO web[][o.sonar.db.Database] Create JDBC data source for jdbc:sqlserver://localhost;databaseName=sonar
2018.05.25 15:52:09 ERROR web[][o.s.s.p.Platform] Web server startup failed
I've tried changing a lot of settings in the sonar and wrapper .conf files without any success. Can anyone help? I can provide more info if needed.
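For completeness, the JDBC part of my conf/sonar.properties looks roughly like this (credentials redacted; the keys are the standard SonarQube ones, as far as I know, and the URL matches the one in web.log above):
sonar.jdbc.username=sonar
sonar.jdbc.password=*****
sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar
Thanks a lot!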

Spark: Execute a Python script with Spark on a multi-node Hadoop cluster

I want to use Spark on top of my multi-node Hadoop cluster, and I have a question about running my Python script in cluster mode.
My configuration:
My Hadoop cluster consists of:
1 Namenode (master)
2 Datanodes (slaves)
So I would like to execute my Python script in a way that uses this cluster. I know Spark can run in standalone mode, but I would like to use my nodes.
My Python script:
It's a very simple script that counts the words in my text file.
import sys
from pyspark import SparkContext

sc = SparkContext()
lines = sc.textFile(sys.argv[1])  # read the input file given on the command line
words = lines.flatMap(lambda line: line.split(' '))  # split each line into words
words_with_1 = words.map(lambda word: (word, 1))  # pair each word with a count of 1
word_counts = words_with_1.reduceByKey(lambda count1, count2: count1 + count2)  # sum the counts per word
result = word_counts.collect()
for (word, count) in result:
    print word.encode("utf8"), count  # Python 2 print statement
My Spark command:
In order to run it with Spark, I do:
time ./bin/spark-submit --master spark://master:7077 /home/hduser/count.py /data.txt
But this command executes Spark in standalone mode, right?
How can I execute Spark using my Hadoop cluster (e.g. YARN) and make the computation parallel and distributed across my cluster?
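From the documentation, I understand a YARN submission should look something like this (the deploy mode, executor count, and memory flag below are my own guesses for my two datanodes, not validated values):
time ./bin/spark-submit --master yarn --deploy-mode cluster --num-executors 2 --executor-memory 1g /home/hduser/count.py /data.txt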
I tried:
time ./bin/spark-submit --master yarn /home/hduser/count.py /data.txt
And I get these issues:
2018-03-15 10:13:14 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-03-15 10:13:15 INFO SparkContext:54 - Running Spark version 2.3.0
2018-03-15 10:13:15 INFO SparkContext:54 - Submitted application: count.py
2018-03-15 10:13:15 INFO SecurityManager:54 - Changing view acls to: hduser
2018-03-15 10:13:15 INFO SecurityManager:54 - Changing modify acls to: hduser
2018-03-15 10:13:15 INFO SecurityManager:54 - Changing view acls groups to:
2018-03-15 10:13:15 INFO SecurityManager:54 - Changing modify acls groups to:
2018-03-15 10:13:15 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hduser); groups with view permissions: Set(); users with modify permissions: Set(hduser)$
2018-03-15 10:13:16 INFO Utils:54 - Successfully started service 'sparkDriver' on port 40388.
2018-03-15 10:13:16 INFO SparkEnv:54 - Registering MapOutputTracker
2018-03-15 10:13:16 INFO SparkEnv:54 - Registering BlockManagerMaster
2018-03-15 10:13:16 INFO BlockManagerMasterEndpoint:54 - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
2018-03-15 10:13:16 INFO BlockManagerMasterEndpoint:54 - BlockManagerMasterEndpoint up
2018-03-15 10:13:16 INFO DiskBlockManager:54 - Created local directory at /tmp/blockmgr-b131528e-849e-4ba7-94fe-c552572f12fc
2018-03-15 10:13:16 INFO MemoryStore:54 - MemoryStore started with capacity 413.9 MB
2018-03-15 10:13:16 INFO SparkEnv:54 - Registering OutputCommitCoordinator
2018-03-15 10:13:17 INFO log:192 - Logging initialized @5400ms
2018-03-15 10:13:17 INFO Server:346 - jetty-9.3.z-SNAPSHOT
2018-03-15 10:13:17 INFO Server:414 - Started @5667ms
2018-03-15 10:13:17 INFO AbstractConnector:278 - Started ServerConnector@4f835332{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2018-03-15 10:13:17 INFO Utils:54 - Successfully started service 'SparkUI' on port 4040.
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2f867b0c{/jobs,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2a0105b7{/jobs/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3fd04590{/jobs/job,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2637750b{/jobs/job/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@439f0c7{/stages,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3978d915{/stages/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@596dc76d{/stages/stage,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@7054d173{/stages/stage/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@47b526bb{/stages/pool,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@7896fc75{/stages/pool/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2fd54632{/storage,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@79dcd5f2{/storage/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1732b48c{/storage/rdd,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5888874b{/storage/rdd/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5de9bebe{/environment,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@428593b4{/environment/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4011c9bc{/executors,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5cbfbc2a{/executors/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4c33f54d{/executors/threadDump,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@22c5d74c{/executors/threadDump/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@6cd7b681{/static,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5ee342f2{/,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4d68a347{/api,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1e878af1{/jobs/job/kill,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@590aa379{/stages/stage/kill,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO SparkUI:54 - Bound SparkUI to 0.0.0.0, and started at http://master:4040
2018-03-15 10:13:19 INFO RMProxy:98 - Connecting to ResourceManager at master/172.30.10.64:8050
2018-03-15 10:13:20 INFO Client:54 - Requesting a new application from cluster with 3 NodeManagers
2018-03-15 10:13:20 INFO Client:54 - Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
2018-03-15 10:13:20 INFO Client:54 - Will allocate AM container, with 896 MB memory including 384 MB overhead
2018-03-15 10:13:20 INFO Client:54 - Setting up container launch context for our AM
2018-03-15 10:13:20 INFO Client:54 - Setting up the launch environment for our AM container
2018-03-15 10:13:20 INFO Client:54 - Preparing resources for our AM container
2018-03-15 10:13:24 WARN Client:66 - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
2018-03-15 10:13:29 INFO Client:54 - Uploading resource file:/tmp/spark-bbfad5cb-3d29-4f45-a1a9-2e37f2c76606/__spark_libs__580552500091841387.zip -> hdfs://master:54310/user/hduser/.sparkStaging/application_1521023754917_0007/__s$
2018-03-15 10:13:33 INFO Client:54 - Uploading resource file:/usr/local/spark/python/lib/pyspark.zip -> hdfs://master:54310/user/hduser/.sparkStaging/application_1521023754917_0007/pyspark.zip
2018-03-15 10:13:33 INFO Client:54 - Uploading resource file:/usr/local/spark/python/lib/py4j-0.10.6-src.zip -> hdfs://master:54310/user/hduser/.sparkStaging/application_1521023754917_0007/py4j-0.10.6-src.zip
2018-03-15 10:13:34 INFO Client:54 - Uploading resource file:/tmp/spark-bbfad5cb-3d29-4f45-a1a9-2e37f2c76606/__spark_conf__7840630163677580304.zip -> hdfs://master:54310/user/hduser/.sparkStaging/application_1521023754917_0007/__$
2018-03-15 10:13:34 INFO SecurityManager:54 - Changing view acls to: hduser
2018-03-15 10:13:34 INFO SecurityManager:54 - Changing modify acls to: hduser
2018-03-15 10:13:34 INFO SecurityManager:54 - Changing view acls groups to:
2018-03-15 10:13:34 INFO SecurityManager:54 - Changing modify acls groups to:
2018-03-15 10:13:34 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hduser); groups with view permissions: Set(); users with modify permissions: Set(hduser)$
2018-03-15 10:13:34 INFO Client:54 - Submitting application application_1521023754917_0007 to ResourceManager
2018-03-15 10:13:34 INFO YarnClientImpl:251 - Submitted application application_1521023754917_0007
2018-03-15 10:13:34 INFO SchedulerExtensionServices:54 - Starting Yarn extension services with app application_1521023754917_0007 and attemptId None
2018-03-15 10:13:35 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:35 INFO Client:54 -
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1521105214408
final status: UNDEFINED
tracking URL: http://master:8088/proxy/application_1521023754917_0007/
user: hduser
2018-03-15 10:13:36 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:37 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:38 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:39 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:40 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:41 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:42 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:43 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:44 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:45 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:46 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:47 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:48 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:49 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:50 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:51 INFO Client:54 - Application report for application_1521023754917_0007 (state: FAILED)
2018-03-15 10:13:51 INFO Client:54 -
client token: N/A
diagnostics: Application application_1521023754917_0007 failed 2 times due to AM Container for appattempt_1521023754917_0007_000002 exited with exitCode: -103
For more detailed output, check application tracking page:http://master:8088/cluster/app/application_1521023754917_0007Then, click on links to logs of each attempt.
Diagnostics: Container [pid=9363,containerID=container_1521023754917_0007_02_000001] is running beyond virtual memory limits. Current usage: 147.7 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing cont$
Dump of the process-tree for container_1521023754917_0007_02_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 9369 9363 9363 9363 (java) 454 16 2250776576 37073 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0007/co$
|- 9363 9361 9363 9363 (bash) 0 0 12869632 742 /bin/bash -c /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0$
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1521105214408
final status: FAILED
tracking URL: http://master:8088/cluster/app/application_1521023754917_0007
user: hduser
2018-03-15 10:13:51 INFO Client:54 - Deleted staging directory hdfs://master:54310/user/hduser/.sparkStaging/application_1521023754917_0007
2018-03-15 10:13:51 ERROR SparkContext:91 - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
2018-03-15 10:13:51 INFO AbstractConnector:318 - Stopped Spark#4f835332{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2018-03-15 10:13:51 INFO SparkUI:54 - Stopped Spark web UI at http://master:4040
2018-03-15 10:13:51 WARN YarnSchedulerBackend$YarnSchedulerEndpoint:66 - Attempted to request executors before the AM has registered!
2018-03-15 10:13:51 INFO YarnClientSchedulerBackend:54 - Shutting down all executors
2018-03-15 10:13:51 INFO YarnSchedulerBackend$YarnDriverEndpoint:54 - Asking each executor to shut down
2018-03-15 10:13:51 INFO SchedulerExtensionServices:54 - Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
2018-03-15 10:13:51 INFO YarnClientSchedulerBackend:54 - Stopped
2018-03-15 10:13:51 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!
2018-03-15 10:13:51 INFO MemoryStore:54 - MemoryStore cleared
2018-03-15 10:13:51 INFO BlockManager:54 - BlockManager stopped
2018-03-15 10:13:51 INFO BlockManagerMaster:54 - BlockManagerMaster stopped
2018-03-15 10:13:51 WARN MetricsSystem:66 - Stopping a MetricsSystem that is not running
2018-03-15 10:13:51 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!
2018-03-15 10:13:52 INFO SparkContext:54 - Successfully stopped SparkContext
Traceback (most recent call last):
File "/home/hduser/count.py", line 4, in <module>
sc = SparkContext()
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 118, in __init__
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 180, in _do_init
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 270, in _initialize_context
File "/usr/local/spark/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1428, in __call__
File "/usr/local/spark/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py", line 320, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
2018-03-15 10:13:52 INFO ShutdownHookManager:54 - Shutdown hook called
2018-03-15 10:13:52 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-bbfad5cb-3d29-4f45-a1a9-2e37f2c76606
2018-03-15 10:13:52 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-f5d31d54-e456-4fcb-bf48-9f950233ad4b
I keep getting FAILED whenever I try to use my cluster with Spark.
Finally I tried:
time ./bin/spark-submit --master yarn --deploy-mode cluster /home/hduser/count.py /data.txt
But once again I ran into issues.
Is there something I'm not understanding? I'm very new to Big Data, so it's entirely possible :/
EDIT:
This is what I get with: yarn application -status application_1521023754917_0007
18/03/15 10:52:07 INFO client.RMProxy: Connecting to ResourceManager at master/172.30.10.64:8050
18/03/15 10:52:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Application Report :
Application-Id : application_1521023754917_0007
Application-Name : count.py
Application-Type : SPARK
User : hduser
Queue : default
Start-Time : 1521105214408
Finish-Time : 1521105231067
Progress : 0%
State : FAILED
Final-State : FAILED
Tracking-URL : http://master:8088/cluster/app/application_1521023754917_0007
RPC Port : -1
AM Host : N/A
Aggregate Resource Allocation : 16329 MB-seconds, 15 vcore-seconds
Diagnostics : Application application_1521023754917_0007 failed 2 times due to AM Container for appattempt_1521023754917_0007_000002 exited with exitCode: -103
For more detailed output, check application tracking page:http://master:8088/cluster/app/application_1521023754917_0007Then, click on links to logs of each attempt.
Diagnostics: Container [pid=9363,containerID=container_1521023754917_0007_02_000001] is running beyond virtual memory limits. Current usage: 147.7 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1521023754917_0007_02_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 9369 9363 9363 9363 (java) 454 16 2250776576 37073 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0007/container_1521023754917_0007_02_000001/tmp -Dspark.yarn.app.container.log.dir=/usr/local/hadoop-2.7.5/logs/userlogs/application_1521023754917_0007/container_1521023754917_0007_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg master:40388 --properties-file /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0007/container_1521023754917_0007_02_000001/__spark_conf__/__spark_conf__.properties
|- 9363 9361 9363 9363 (bash) 0 0 12869632 742 /bin/bash -c /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0007/container_1521023754917_0007_02_000001/tmp -Dspark.yarn.app.container.log.dir=/usr/local/hadoop-2.7.5/logs/userlogs/application_1521023754917_0007/container_1521023754917_0007_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg 'master:40388' --properties-file /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0007/container_1521023754917_0007_02_000001/__spark_conf__/__spark_conf__.properties 1> /usr/local/hadoop-2.7.5/logs/userlogs/application_1521023754917_0007/container_1521023754917_0007_02_000001/stdout 2> /usr/local/hadoop-2.7.5/logs/userlogs/application_1521023754917_0007/container_1521023754917_0007_02_000001/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
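The key line in the diagnostics above is "running beyond virtual memory limits ... 2.1 GB of 2.1 GB virtual memory used": YARN kills the AM container because its virtual memory exceeds the allowed ratio (by default 2.1 times the physical allocation, which is exactly where the 2.1 GB cap on a 1 GB container comes from). A common workaround, sketched here under the assumption of a stock Hadoop 2.7 setup, is to relax or disable the virtual-memory check in yarn-site.xml on every NodeManager and restart YARN; the values are illustrative:
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value> <!-- disables the virtual-memory check entirely -->
</property>
<!-- alternatively, keep the check but allow more virtual memory per unit of physical memory -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value> <!-- default is 2.1, which matches the cap in the diagnostics -->
</property>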
For me, this spark-submit runs the Python script on all Spark nodes:
spark-submit --master yarn \
  --deploy-mode cluster \
  --num-executors 1 \
  --driver-memory 2g \
  --executor-memory 1g \
  --executor-cores 1 \
  hdfs://<host>:<port>/home/hduser/count.py /data.txt
The Spark environment needs to be extended with:
export PYSPARK_PYTHON=/opt/bin/python
Furthermore, the .py file needs to be located on HDFS so that all Spark nodes in the cluster can read it, and it needs to be accessible to the Spark user.
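A minimal sketch of that copy, assuming the NameNode URL that appears in the staging-directory log line above (hdfs://master:54310) and the /user/hduser home directory:
hdfs dfs -mkdir -p hdfs://master:54310/user/hduser
hdfs dfs -put -f /home/hduser/count.py hdfs://master:54310/user/hduser/count.py   # -f overwrites an existing copy
hdfs dfs -ls hdfs://master:54310/user/hduser   # verify the file is visible cluster-wide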
You first have to put the .py script in an HDFS location, using the correct NameNode URL,
e.g. hdfs dfs -ls hdfs://hostname:1543/
If you see your file in the listing, then this is the correct path.
Next, execute:
./bin/spark-submit --master yarn hdfs://COMPLETEHOSTNAME:1543/count.py /data.txt
and it should work.

Deserialisation error on worker in standalone Spark cluster

I have a Spark application that runs fine on a standalone Spark cluster on my laptop
(master and one worker), but fails when I try to run it on a standalone Spark cluster
deployed on EC2 (master and worker are on different machines).
The application is structured as follows:
there is a Java process (the 'message processor') that runs on the same machine as the
Spark master. When it starts, it submits itself to the Spark master; then
it listens on SQS and, for each received message, runs a Spark job to process a file from S3 whose address is configured in the message.
It all seems to fail at the point where the Spark driver tries to send the job
to the Spark executor.
Below is the code from the 'message processor' that configures the SparkContext,
then the Spark driver log, and then the Spark executor log.
The outputs of my code and some important points are marked in bold, and
I've simplified the code and logs in places for the sake of readability.
I would appreciate your help very much, because I've run out of ideas on this problem.
'message processor' code:
logger.info("Started Integration Hub SubmitDriver in test mode.");
SparkConf sparkConf = new SparkConf()
.setMaster(SPARK_MASTER_URI)
.setAppName(APPLICATION_NAME)
.setSparkHome(SPARK_LOCATION_ON_EC2_MACHINE);
sparkConf.setJars(JavaSparkContext.jarOfClass(this.getClass()));
// configure spark executor to use log4j properties located in the local spark conf dir
sparkConf.set("spark.executor.extraJavaOptions", "-XX:+UseConcMarkSweepGC -Dlog4j.configuration=log4j_integrationhub_sparkexecutor.properties");
sparkConf.set("spark.executor.memory", "1g");
sparkConf.set("spark.cores.max", "3");
// Spill shuffle to disk to avoid OutOfMemory, at cost of reduced performance
sparkConf.set("spark.shuffle.spill", "true");
logger.info("Connecting Spark");
JavaSparkContext sc = new JavaSparkContext(sparkConf);
sc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", AWS_KEY);
sc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", AWS_SECRET);
logger.info("Spark connected");
Driver log:
2015-05-01 07:47:14 INFO ClassPathBeanDefinitionScanner:239 - JSR-330 'javax.inject.Named' annotation found and supported for component scanning
2015-05-01 07:47:14 INFO AnnotationConfigApplicationContext:510 - Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext#5540b23b: startup date [Fri May 01 07:47:14 UTC 2015]; root of context hierarchy
2015-05-01 07:47:14 INFO AutowiredAnnotationBeanPostProcessor:140 - JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2015-05-01 07:47:14 INFO DefaultListableBeanFactory:596 - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory#13f948e: defining beans [org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,integrationHubConfig,org.springframework.context.annotation.ConfigurationClassPostProcessor.importAwareProcessor,processorInlineDriver,s3Accessor,cdFetchUtil,httpUtil,cdPushUtil,submitDriver,databaseLogger,connectorUtil,totangoDataValidations,environmentConfig,sesUtil,processorExecutor,processorDriver]; root of factory hierarchy
2015-05-01 07:47:15 INFO SubmitDriver:69 - Started Integration Hub SubmitDriver in test mode.
2015-05-01 07:47:15 INFO SubmitDriver:101 - Connecting Spark
2015-05-01 07:47:15 INFO SparkContext:59 - Running Spark version 1.3.0
2015-05-01 07:47:16 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-05-01 07:47:16 INFO SecurityManager:59 - Changing view acls to: hadoop
2015-05-01 07:47:16 INFO SecurityManager:59 - Changing modify acls to: hadoop
2015-05-01 07:47:16 INFO SecurityManager:59 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
2015-05-01 07:47:18 INFO Slf4jLogger:80 - Slf4jLogger started
2015-05-01 07:47:18 INFO Remoting:74 - Starting remoting
2015-05-01 07:47:18 INFO Remoting:74 - Remoting started; listening on addresses :[akka.tcp://sparkDriver#sparkMasterIp:39176]
2015-05-01 07:47:18 INFO Utils:59 - Successfully started service 'sparkDriver' on port 39176.
2015-05-01 07:47:18 INFO SparkEnv:59 - Registering MapOutputTracker
2015-05-01 07:47:18 INFO SparkEnv:59 - Registering BlockManagerMaster
2015-05-01 07:47:18 INFO HttpFileServer:59 - HTTP File server directory is /tmp/spark-e4726219-5708-48c9-8377-c103ad1e7a75/httpd-fe68500f-01b1-4241-a3a2-3b4cf8394daf
2015-05-01 07:47:18 INFO HttpServer:59 - Starting HTTP Server
2015-05-01 07:47:19 INFO Server:272 - jetty-8.y.z-SNAPSHOT
2015-05-01 07:47:19 INFO AbstractConnector:338 - Started SocketConnector#0.0.0.0:47166
2015-05-01 07:47:19 INFO Utils:59 - Successfully started service 'HTTP file server' on port 47166.
2015-05-01 07:47:19 INFO SparkEnv:59 - Registering OutputCommitCoordinator
2015-05-01 07:47:24 INFO Server:272 - jetty-8.y.z-SNAPSHOT
2015-05-01 07:47:24 INFO AbstractConnector:338 - Started SelectChannelConnector#0.0.0.0:4040
2015-05-01 07:47:24 INFO Utils:59 - Successfully started service 'SparkUI' on port 4040.
2015-05-01 07:47:24 INFO SparkUI:59 - Started SparkUI at http://sparkMasterIp:4040
2015-05-01 07:47:24 INFO SparkContext:59 - Added JAR /rev/8fcc3a5/integhub_be/genconn/lib/genconn-8fcc3a5.jar at http://sparkMasterIp:47166/jars/genconn-8fcc3a5.jar with timestamp 1430466444838
2015-05-01 07:47:24 INFO AppClient$ClientActor:59 - Connecting to master akka.tcp://sparkMaster#sparkMasterIp:7077/user/Master...
2015-05-01 07:47:25 INFO AppClient$ClientActor:59 - Executor added: app-20150501074725-0005/0 on worker-20150430140019-ip-sparkWorkerIp-38610 (sparkWorkerIp:38610) with 1 cores
2015-05-01 07:47:25 INFO AppClient$ClientActor:59 - Executor updated: app-20150501074725-0005/0 is now LOADING
2015-05-01 07:47:25 INFO AppClient$ClientActor:59 - Executor updated: app-20150501074725-0005/0 is now RUNNING
2015-05-01 07:47:25 INFO NettyBlockTransferService:59 - Server created on 34024
2015-05-01 07:47:26 INFO SubmitDriver:116 - Spark connected
2015-05-01 07:47:26 INFO SubmitDriver:125 - Connected to SQS... Listening on https://sqsAddress
2015-05-01 07:51:39 INFO SubmitDriver:130 - Polling Message queue...
2015-05-01 07:51:47 INFO SubmitDriver:148 - Received Message : {someMessage}
2015-05-01 07:51:47 INFO SubmitDriver:158 - Process Input JSON
2015-05-01 07:51:50 INFO SparkContext:59 - Created broadcast 0 from textFile at ProcessorDriver.java:208
2015-05-01 07:51:52 INFO FileInputFormat:253 - Total input paths to process : 1
2015-05-01 07:51:52 INFO SparkContext:59 - Starting job: first at ConnectorUtil.java:605
2015-05-01 07:51:52 INFO SparkContext:59 - Created broadcast 1 from broadcast at DAGScheduler.scala:839
2015-05-01 07:51:52 WARN TaskSetManager:71 - ... *the warning will be repeated as error below*
2015-05-01 07:51:52 ERROR TaskSetManager:75 - Task 0 in stage 0.0 failed 4 times; aborting job
2015-05-01 07:51:52 ERROR ProcessorDriver:261 - Error executing the batch Operation..
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, sparkWorkerIp): java.io.EOFException
at java.io.ObjectInputStream$BlockDataInputStream.readFully(ObjectInputStream.java:2744)
at java.io.ObjectInputStream.readFully(ObjectInputStream.java:1032)
at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
at org.apache.hadoop.io.UTF8.readChars(UTF8.java:216)
at org.apache.hadoop.io.UTF8.readString(UTF8.java:208)
at org.apache.hadoop.mapred.FileSplit.readFields(FileSplit.java:87)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:237)
at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:66)
at org.apache.spark.SerializableWritable$$anonfun$readObject$1.apply$mcV$sp(SerializableWritable.scala:43)
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1137)
at org.apache.spark.SerializableWritable.readObject(SerializableWritable.scala:39)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:68)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:94)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:185)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace: ...
Worker log:
2015-05-01 07:47:26 INFO CoarseGrainedExecutorBackend:47 - Registered signal handlers for [TERM, HUP, INT]
2015-05-01 07:47:26 DEBUG Configuration:227 - java.io.IOException: config()
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:227)
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:214)
at org.apache.spark.deploy.SparkHadoopUtil.newConfiguration(SparkHadoopUtil.scala:78)
at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:43)
at org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:220)
at org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:128)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:224)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
2015-05-01 07:47:26 DEBUG Groups:139 - Creating new Groups object
2015-05-01 07:47:27 DEBUG Groups:59 - Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
2015-05-01 07:47:27 DEBUG Configuration:227 - java.io.IOException: config()
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:227)
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:214)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:184)
at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:236)
at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:79)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:209)
at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:226)
at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:44)
at org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:220)
at org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:128)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:224)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
2015-05-01 07:47:27 DEBUG SparkHadoopUtil:63 - running as user: hadoop
2015-05-01 07:47:27 DEBUG UserGroupInformation:146 - hadoop login
2015-05-01 07:47:27 DEBUG UserGroupInformation:95 - hadoop login commit
2015-05-01 07:47:27 DEBUG UserGroupInformation:125 - using local user:UnixPrincipal: root
2015-05-01 07:47:27 DEBUG UserGroupInformation:493 - UGI loginUser:root
2015-05-01 07:47:27 DEBUG UserGroupInformation:1143 - PriviledgedAction as:hadoop from:org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:59)
2015-05-01 07:47:27 INFO SecurityManager:59 - Changing view acls to: root,hadoop
2015-05-01 07:47:27 INFO SecurityManager:59 - Changing modify acls to: root,hadoop
2015-05-01 07:47:27 INFO SecurityManager:59 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, hadoop); users with modify permissions: Set(root, hadoop)
2015-05-01 07:47:27 DEBUG SecurityManager:63 - SSLConfiguration for file server: SSLOptions{enabled=false, keyStore=None, keyStorePassword=None, trustStore=None, trustStorePassword=None, protocol=None, enabledAlgorithms=Set()}
2015-05-01 07:47:27 DEBUG SecurityManager:63 - SSLConfiguration for Akka: SSLOptions{enabled=false, keyStore=None, keyStorePassword=None, trustStore=None, trustStorePassword=None, protocol=None, enabledAlgorithms=Set()}
2015-05-01 07:47:27 DEBUG AkkaUtils:63 - In createActorSystem, requireCookie is: off
2015-05-01 07:47:28 INFO Slf4jLogger:80 - Slf4jLogger started
2015-05-01 07:47:28 INFO Remoting:74 - Starting remoting
2015-05-01 07:47:29 INFO Remoting:74 - Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher#sparkWorkerIp:49741]
2015-05-01 07:47:29 INFO Utils:59 - Successfully started service 'driverPropsFetcher' on port 49741.
2015-05-01 07:47:29 INFO RemoteActorRefProvider$RemotingTerminator:74 - Shutting down remote daemon.
2015-05-01 07:47:29 INFO RemoteActorRefProvider$RemotingTerminator:74 - Remote daemon shut down; proceeding with flushing remote transports.
2015-05-01 07:47:29 INFO SecurityManager:59 - Changing view acls to: root,hadoop
2015-05-01 07:47:29 INFO SecurityManager:59 - Changing modify acls to: root,hadoop
2015-05-01 07:47:29 INFO SecurityManager:59 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, hadoop); users with modify permissions: Set(root, hadoop)
2015-05-01 07:47:29 DEBUG SecurityManager:63 - SSLConfiguration for file server: SSLOptions{enabled=false, keyStore=None, keyStorePassword=None, trustStore=None, trustStorePassword=None, protocol=None, enabledAlgorithms=Set()}
2015-05-01 07:47:29 DEBUG SecurityManager:63 - SSLConfiguration for Akka: SSLOptions{enabled=false, keyStore=None, keyStorePassword=None, trustStore=None, trustStorePassword=None, protocol=None, enabledAlgorithms=Set()}
2015-05-01 07:47:29 DEBUG AkkaUtils:63 - In createActorSystem, requireCookie is: off
2015-05-01 07:47:29 INFO RemoteActorRefProvider$RemotingTerminator:74 - Remoting shut down.
2015-05-01 07:47:29 INFO Slf4jLogger:80 - Slf4jLogger started
2015-05-01 07:47:29 INFO Remoting:74 - Starting remoting
2015-05-01 07:47:29 INFO Remoting:74 - Remoting started; listening on addresses :[akka.tcp://sparkExecutor# sparkWorkerIp:45299]
2015-05-01 07:47:29 INFO Utils:59 - Successfully started service 'sparkExecutor' on port 45299.
2015-05-01 07:47:29 DEBUG SparkEnv:63 - Using serializer: class org.apache.spark.serializer.JavaSerializer
2015-05-01 07:47:29 INFO AkkaUtils:59 - Connecting to MapOutputTracker: akka.tcp://sparkDriver# sparkMasterIp:39176/user/MapOutputTracker
2015-05-01 07:47:30 INFO AkkaUtils:59 - Connecting to BlockManagerMaster: akka.tcp://sparkDriver#sparkMasterIp:39176/user/BlockManagerMaster
2015-05-01 07:47:30 INFO DiskBlockManager:59 - Created local directory at /mnt/spark/spark-d745cbac-d1cc-47ee-9eba-e99e104732d5/spark-e3963fa3-cab6-4c69-8e78-d23246250a5d/spark-6f1a9653-86fd-401f-bf37-6eca5b6c0adf/blockmgr-ee0e9452-4111-42d0-ab5e-e66317052e4b
2015-05-01 07:47:30 INFO MemoryStore:59 - MemoryStore started with capacity 548.5 MB
2015-05-01 07:47:30 INFO AkkaUtils:59 - Connecting to OutputCommitCoordinator: akka.tcp://sparkDriver# sparkMasterIp:39176/user/OutputCommitCoordinator
2015-05-01 07:47:30 INFO CoarseGrainedExecutorBackend:59 - Connecting to driver: akka.tcp://sparkDriver# sparkMasterIp:39176/user/CoarseGrainedScheduler
2015-05-01 07:47:30 INFO WorkerWatcher:59 - Connecting to worker akka.tcp://sparkWorker#sparkWorkerIp:38610/user/Worker
2015-05-01 07:47:30 DEBUG WorkerWatcher:50 - [actor] received message Associated [akka.tcp://sparkExecutor# sparkWorkerIp:45299] -> [akka.tcp://sparkWorker# sparkWorkerIp:38610] from Actor[akka://sparkExecutor/deadLetters]
2015-05-01 07:47:30 INFO WorkerWatcher:59 - Successfully connected to akka.tcp://sparkWorker# sparkWorkerIp:38610/user/Worker
2015-05-01 07:47:30 DEBUG WorkerWatcher:56 - [actor] handled message (1.18794 ms) Associated [akka.tcp://sparkExecutor# sparkWorkerIp:45299] -> [akka.tcp://sparkWorker# sparkWorkerIp:38610] from Actor[akka://sparkExecutor/deadLetters]
2015-05-01 07:47:30 DEBUG CoarseGrainedExecutorBackend:50 - [actor] received message RegisteredExecutor from Actor[akka.tcp://sparkDriver# sparkMasterIp:39176/user/CoarseGrainedScheduler#-970636338]
2015-05-01 07:47:30 INFO CoarseGrainedExecutorBackend:59 - Successfully registered with driver
2015-05-01 07:47:30 INFO Executor:59 - Starting executor ID 0 on host sparkWorkerIp
2015-05-01 07:47:30 DEBUG InternalLoggerFactory:71 - Using SLF4J as the default logging framework
2015-05-01 07:47:30 DEBUG PlatformDependent0:76 - java.nio.Buffer.address: available
2015-05-01 07:47:30 DEBUG PlatformDependent0:76 - sun.misc.Unsafe.theUnsafe: available
2015-05-01 07:47:30 DEBUG PlatformDependent0:71 - sun.misc.Unsafe.copyMemory: available
2015-05-01 07:47:30 DEBUG PlatformDependent0:76 - java.nio.Bits.unaligned: true
2015-05-01 07:47:30 DEBUG PlatformDependent:76 - UID: 0
2015-05-01 07:47:30 DEBUG PlatformDependent:76 - Java version: 7
2015-05-01 07:47:30 DEBUG PlatformDependent:76 - -Dio.netty.noUnsafe: false
2015-05-01 07:47:30 DEBUG PlatformDependent:76 - sun.misc.Unsafe: available
2015-05-01 07:47:30 DEBUG PlatformDependent:76 - -Dio.netty.noJavassist: false
2015-05-01 07:47:30 DEBUG PlatformDependent:71 - Javassist: unavailable
2015-05-01 07:47:30 DEBUG PlatformDependent:71 - You don't have Javassist in your class path or you don't have enough permission to load dynamically generated classes. Please check the configuration for better performance.
2015-05-01 07:47:30 DEBUG PlatformDependent:76 - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
2015-05-01 07:47:30 DEBUG PlatformDependent:76 - -Dio.netty.bitMode: 64 (sun.arch.data.model)
2015-05-01 07:47:30 DEBUG PlatformDependent:76 - -Dio.netty.noPreferDirect: false
2015-05-01 07:47:30 DEBUG MultithreadEventLoopGroup:76 - -Dio.netty.eventLoopThreads: 2
2015-05-01 07:47:30 DEBUG NioEventLoop:76 - -Dio.netty.noKeySetOptimization: false
2015-05-01 07:47:30 DEBUG NioEventLoop:76 - -Dio.netty.selectorAutoRebuildThreshold: 512
2015-05-01 07:47:30 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.numHeapArenas: 1
2015-05-01 07:47:30 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.numDirectArenas: 1
2015-05-01 07:47:30 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.pageSize: 8192
2015-05-01 07:47:30 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.maxOrder: 11
2015-05-01 07:47:30 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.chunkSize: 16777216
2015-05-01 07:47:30 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.tinyCacheSize: 512
2015-05-01 07:47:30 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.smallCacheSize: 256
2015-05-01 07:47:30 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.normalCacheSize: 64
2015-05-01 07:47:30 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2015-05-01 07:47:30 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.cacheTrimInterval: 8192
2015-05-01 07:47:30 DEBUG ThreadLocalRandom:71 - -Dio.netty.initialSeedUniquifier: 0x4ac460da6a283b82 (took 1 ms)
2015-05-01 07:47:31 DEBUG ByteBufUtil:76 - -Dio.netty.allocator.type: unpooled
2015-05-01 07:47:31 DEBUG ByteBufUtil:76 - -Dio.netty.threadLocalDirectBufferSize: 65536
2015-05-01 07:47:31 DEBUG NetUtil:86 - Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%1)
2015-05-01 07:47:31 DEBUG NetUtil:81 - /proc/sys/net/core/somaxconn: 128
2015-05-01 07:47:31 DEBUG TransportServer:106 - Shuffle server started on port :46839
2015-05-01 07:47:31 INFO NettyBlockTransferService:59 - Server created on 46839
2015-05-01 07:47:31 INFO BlockManagerMaster:59 - Trying to register BlockManager
2015-05-01 07:47:31 INFO BlockManagerMaster:59 - Registered BlockManager
2015-05-01 07:47:31 INFO AkkaUtils:59 - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver# sparkMasterIp:39176/user/HeartbeatReceiver
2015-05-01 07:47:31 DEBUG CoarseGrainedExecutorBackend:56 - [actor] handled message (339.232401 ms) RegisteredExecutor from Actor[akka.tcp://sparkDriver# sparkMasterIp:39176/user/CoarseGrainedScheduler#-970636338]
2015-05-01 07:51:52 DEBUG CoarseGrainedExecutorBackend:50 - [actor] received message LaunchTask(org.apache.spark.util.SerializableBuffer#608752bf) from Actor[akka.tcp://sparkDriver# sparkMasterIp:39176/user/CoarseGrainedScheduler#-970636338]
2015-05-01 07:51:52 INFO CoarseGrainedExecutorBackend:59 - Got assigned task 0
2015-05-01 07:51:52 DEBUG CoarseGrainedExecutorBackend:56 - [actor] handled message (22.96474 ms) LaunchTask(org.apache.spark.util.SerializableBuffer#608752bf) from Actor[akka.tcp://sparkDriver# sparkMasterIp:39176/user/CoarseGrainedScheduler#-970636338]
2015-05-01 07:51:52 INFO Executor:59 - Running task 0.0 in stage 0.0 (TID 0)
2015-05-01 07:51:52 INFO Executor:59 - Fetching http://sparkMasterIp:47166/jars/genconn-8fcc3a5.jar with timestamp 1430466444838
2015-05-01 07:51:52 DEBUG Configuration:227 - java.io.IOException: config()
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:227)
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:214)
at org.apache.spark.deploy.SparkHadoopUtil.newConfiguration(SparkHadoopUtil.scala:78)
at org.apache.spark.executor.Executor.hadoopConf$lzycompute$1(Executor.scala:356)
at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$hadoopConf$1(Executor.scala:356)
at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:375)
at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:366)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:366)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:184)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2015-05-01 07:51:52 DEBUG Utils:63 - fetchFile not using security
2015-05-01 07:51:52 INFO Utils:59 - Fetching http://sparkMasterIp:47166/jars/genconn-8fcc3a5.jar to /mnt/spark/spark-d745cbac-d1cc-47ee-9eba-e99e104732d5/spark-e3963fa3-cab6-4c69-8e78-d23246250a5d/spark-0eabace1-ee89-48a3-9a71-0218f0ffc61c/fetchFileTemp2001054150131059247.tmp
2015-05-01 07:51:52 INFO Utils:59 - Copying /mnt/spark/spark-d745cbac-d1cc-47ee-9eba-e99e104732d5/spark-e3963fa3-cab6-4c69-8e78-d23246250a5d/spark-0eabace1-ee89-48a3-9a71-0218f0ffc61c/18615094621430466444838_cache to /mnt/spark-work/app-20150501074725-0005/0/./genconn-8fcc3a5.jar
2015-05-01 07:51:52 INFO Executor:59 - Adding file:/mnt/spark-work/app-20150501074725-0005/0/./genconn-8fcc3a5.jar to class loader
2015-05-01 07:51:52 DEBUG Configuration:227 - java.io.IOException: config()
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:227)
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:214)
at org.apache.spark.SerializableWritable$$anonfun$readObject$1.apply$mcV$sp(SerializableWritable.scala:42)
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1137)
at org.apache.spark.SerializableWritable.readObject(SerializableWritable.scala:39)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:68)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:94)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:185)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2015-05-01 07:51:52 ERROR Executor:96 - Exception in task 0.0 in stage 0.0 (TID 0)
*the error that is printed in the driver log*
...
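A note on the stack trace above, as an observation rather than a confirmed fix: the EOFException is thrown while deserializing a Hadoop FileSplit (org.apache.hadoop.mapred.FileSplit.readFields), a pattern often seen when the driver and the executor run Spark builds compiled against different Hadoop versions, so the serialized split format does not match. A quick way to compare the two machines, assuming SPARK_HOME points at the install on each:
ls $SPARK_HOME/lib/spark-assembly-*.jar   # the assembly file name encodes the Spark and Hadoop versions
spark-submit --version
If the versions differ between the EC2 master and worker, aligning them is the first thing to try.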

Sonar installed & started but not accessible

Summary:
Installed standalone sonar
Started through /etc/init.d/sonar start
sonar.log doesn't show any error
unable to open http://mymachine.ipaddress:9000/sonar
GET http://mymachine.ipaddress:9000/sonar net::ERR_CONNECTION_TIMED_OUT
I am trying to run a standalone sonar on a remote machine running CentOS 6.5.
Following: http://sonar-pkg.sourceforge.net/ - installed through:
yum install sonar
I've set up a remote database connection to a MySQL instance, already configured with the 'sonar' database and user as described in: http://docs.sonarqube.org/display/SONAR/Installing
Modified sonar.properties configuration:
sonar.jdbc.url=jdbc:mysql://<mysql.ipaddress>:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
Initially I changed only the JDBC connection; Sonar started fine, but I was unable to open the Sonar web UI at http://mymachine.ipaddress:9000/
Based on:
Sonar can be accessed locally but not accessed elsewhere
Unable to open remotely installed sonar on a browser
I updated more properties:
sonar.web.host=<mymachine.ipaddress>
sonar.web.context=/sonar
sonar.web.port=9000
and ran Sonar again:
/etc/init.d/sonar start
Sonar started fine, but I am still unable to access it through the browser.
Going to http://mymachine.ipaddress:9000/sonar gives: "This webpage is not available"
GET http://mymachine.ipaddress:9000/sonar net::ERR_CONNECTION_TIMED_OUT
sonar.log:
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2014.10.01 14:30:18 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[search]:
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.55.x86_64/jre/bin/java -Xmx256m
-Xms256m -Xss256k -Djava.net.preferIPv4Stack=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.awt.headless=true -Djava.io.tmpdir=/opt/sonar/temp -cp ./lib/common/:./lib/search/ org.sonar.search.SearchServer
/tmp/sq-process2719628111769939329properties
2014.10.01 14:30:18 WARN sea[o.s.p.ProcessEntryPoint] Starting search
2014.10.01 14:30:18 INFO sea[o.s.s.SearchServer] Starting ES[sonarqube] on port: 9001
2014.10.01 14:30:18 INFO sea[o.elasticsearch.node] [sonar-1412188217506] version[1.1.2], pid[329],
build[e511f7b/2014-05-22T12:27:39Z]
2014.10.01 14:30:18 INFO sea[o.elasticsearch.node] [sonar-1412188217506] initializing ...
2014.10.01 14:30:18 INFO sea[o.e.plugins] [sonar-1412188217506] loaded [], sites []
2014.10.01 14:30:20 INFO sea[o.elasticsearch.node] [sonar-1412188217506] initialized
2014.10.01 14:30:20 INFO sea[o.elasticsearch.node] [sonar-1412188217506] starting ...
2014.10.01 14:30:20 INFO sea[o.e.transport] [sonar-1412188217506] bound_address {inet[/0.0.0.0:9001]}, publish_address
{inet[/:9001]}
2014.10.01 14:30:23 INFO sea[o.e.cluster.service] [sonar-1412188217506] new_master
[sonar-1412188217506][k85f1MFxQgSy_aXegApX1g][CORPSTGCI01][inet[/:9001]]{rack_id=sonar-1412188217506},
reason: zen-disco-join (elected_as_master)
2014.10.01 14:30:23 INFO sea[o.e.discovery] [sonar-1412188217506] sonarqube/k85f1MFxQgSy_aXegApX1g
2014.10.01 14:30:25 INFO sea[o.e.gateway] [sonar-1412188217506] recovered [2] indices into cluster_state
2014.10.01 14:30:25 INFO sea[o.elasticsearch.node] [sonar-1412188217506] started
2014.10.01 14:30:31 INFO app[o.s.p.m.Monitor] Process[search] is up
2014.10.01 14:30:31 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[web]:
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.55.x86_64/jre/bin/java -Xmx768m
-XX:MaxPermSize=160m -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.management.enabled=false -Djava.io.tmpdir=/opt/sonar/temp -cp ./lib/common/:./lib/server/:/opt/sonar/lib/jdbc/mysql/mysql-connector-java-5.1.27.jar
org.sonar.server.app.WebServer
/tmp/sq-process7956510077635803069properties
2014.10.01 14:30:32 WARN web[o.s.p.ProcessEntryPoint] Starting web
2014.10.01 14:30:32 INFO web[o.s.s.app.Connectors] HTTP connector is enabled on port 9000
2014.10.01 14:30:32 INFO web[o.s.s.app.Webapp] Webapp directory: /opt/sonar/web
2014.10.01 14:30:33 INFO web[o.e.plugins] [sonar-1412188217506] loaded [], sites []
2014.10.01 14:30:34 INFO web[o.s.s.p.ServerImpl] SonarQube Server / 4.5 / c8bb686cbee8e1dce3312ef253db76e7c0e3c0c7
2014.10.01 14:30:34 INFO web[o.s.c.p.Database] Create JDBC datasource for
jdbc:mysql://10.0.30.204:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
2014.10.01 14:30:36 INFO web[o.s.s.p.DefaultServerFileSystem] SonarQube home: /opt/sonar
2014.10.01 14:30:36 INFO web[org.sonar.INFO] Install plugins...
2014.10.01 14:30:36 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Findbugs / 2.4 /
a334be36ba4374bb779255272c53fb08675ac2c2
2014.10.01 14:30:36 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Duplications / 4.5 /
c8bb686cbee8e1dce3312ef253db76e7c0e3c0c7
2014.10.01 14:30:36 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Core / 4.5 / c8bb686cbee8e1dce3312ef253db76e7c0e3c0c7
2014.10.01 14:30:36 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Java / 2.4 / 7e7e6335211bb9c0ff9727065f43e7010cc3df91
2014.10.01 14:30:36 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Database Cleaner / 4.5 /
c8bb686cbee8e1dce3312ef253db76e7c0e3c0c7
2014.10.01 14:30:36 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin English Pack / 4.5 /
c8bb686cbee8e1dce3312ef253db76e7c0e3c0c7
2014.10.01 14:30:36 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Email notifications / 4.5 /
c8bb686cbee8e1dce3312ef253db76e7c0e3c0c7
2014.10.01 14:30:36 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Design / 4.5 / c8bb686cbee8e1dce3312ef253db76e7c0e3c0c7
2014.10.01 14:30:36 INFO web[org.sonar.INFO] Install plugins done: 88 ms
2014.10.01 14:30:36 INFO web[o.s.s.p.RailsAppsDeployer] Deploy Ruby on Rails applications
2014.10.01 14:30:36 INFO web[o.s.j.s.AbstractDatabaseConnector] Initializing Hibernate
2014.10.01 14:30:38 INFO web[o.s.s.p.UpdateCenterClient] Update center: http://update.sonarsource.org/update-center.properties (no
proxy)
2014.10.01 14:30:38 INFO web[org.sonar.INFO] Code colorizer, supported languages: java
2014.10.01 14:30:39 INFO web[o.s.s.n.NotificationService] Notification service started (delay 60 sec.)
2014.10.01 14:30:39 INFO web[o.s.s.s.IndexSynchronizer] Starting DB to Index synchronization
2014.10.01 14:30:39 INFO web[o.s.s.s.BaseIndex] Index rules:rules has last update of Wed Oct 01 13:39:50 EDT 2014
2014.10.01 14:30:42 INFO web[o.s.s.s.BaseIndex] Index rules:activeRules has last update of Wed Oct 01 13:40:01 EDT 2014
2014.10.01 14:30:42 INFO web[o.s.s.s.BaseIndex] Index logs:sonarLogs has last update of Wed Dec 31 19:00:00 EST 1969
2014.10.01 14:30:44 INFO web[o.s.s.s.IndexSynchronizer] Synchronization done in 5413ms...
2014.10.01 14:30:44 INFO web[org.sonar.INFO] Deploy GWT plugins...
2014.10.01 14:30:44 INFO web[org.sonar.INFO] Deploy GWT plugins done: 0 ms
2014.10.01 14:30:44 INFO web[org.sonar.INFO] Load metrics...
2014.10.01 14:30:45 INFO web[o.s.s.s.RegisterMetrics] Cleaning quality gate conditions
2014.10.01 14:30:45 INFO web[org.sonar.INFO] Load metrics done: 441 ms
2014.10.01 14:30:45 INFO web[o.s.s.s.RegisterDebtModel] Register technical debt model...
2014.10.01 14:30:45 INFO web[o.s.s.s.RegisterDebtModel] Register technical debt model done: 31 ms
2014.10.01 14:30:45 INFO web[org.sonar.INFO] Register rules...
2014.10.01 14:30:47 INFO web[org.sonar.INFO] Register rules done: 2256 ms
2014.10.01 14:30:47 INFO web[o.s.s.q.RegisterQualityProfiles] Register Quality Profiles...
2014.10.01 14:30:49 INFO web[o.s.s.q.RegisterQualityProfiles] Register Quality Profiles done: 2235 ms
2014.10.01 14:30:49 INFO web[o.s.s.s.RegisterNewMeasureFilters] Register measure filters...
2014.10.01 14:30:49 INFO web[o.s.s.s.RegisterNewMeasureFilters] Register measure filters done: 4 ms
2014.10.01 14:30:49 INFO web[o.s.s.s.RegisterDashboards] Register dashboards...
2014.10.01 14:30:49 INFO web[o.s.s.s.RegisterDashboards] Register dashboards done: 9 ms
2014.10.01 14:30:49 INFO web[o.s.s.s.RegisterPermissionTemplates] Register permission templates...
2014.10.01 14:30:49 INFO web[o.s.s.s.RegisterPermissionTemplates] Register permission templates done: 2 ms
2014.10.01 14:30:49 INFO web[o.s.s.s.RenameDeprecatedPropertyKeys] Rename deprecated property keys
2014.10.01 14:30:50 INFO web[jruby.rack] jruby 1.7.9 (ruby-1.8.7p370) 2013-12-06 87b108a on OpenJDK 64-Bit Server VM
1.7.0_55-mockbuild_2014_04_16_12_11-b00 [linux-amd64]
2014.10.01 14:30:50 INFO web[jruby.rack] using a shared (threadsafe!) runtime
2014.10.01 14:31:09 INFO web[jruby.rack] keeping custom (config.logger) Rails logger instance
2014.10.01 14:31:32 INFO web[o.a.c.u.SessionIdGenerator] Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took
[22,433] milliseconds.
2014.10.01 14:31:32 INFO web[o.s.s.app.Logging] Web server is started
2014.10.01 14:31:32 INFO app[o.s.p.m.Monitor] Process[web] is up
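Judging from the log, Process[web] comes up and the HTTP connector is enabled on port 9000, so the server itself looks healthy; with a remote ERR_CONNECTION_TIMED_OUT the usual suspect on CentOS 6 is iptables filtering the port. A quick check, assuming root access and the stock iptables service (the last rule is temporary and not persisted):
curl -sv http://127.0.0.1:9000/sonar >/dev/null   # from the Sonar machine itself: does it answer locally?
iptables -L -n | grep 9000                        # is any rule mentioning port 9000?
iptables -I INPUT -p tcp --dport 9000 -j ACCEPT   # open the port temporarily, then retry from the browser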
I got the exact same problem with 4.5. I even downgraded to 4.4 and got the same issue.
As far as I can tell, the newer versions use Elasticsearch and that part is totally broken.
I downgraded to version 4.3.3 and it started OK (and I didn't have to do anything special to avoid losing the existing data in my MySQL DB - from 4.3.2).
I reviewed the logs in /logs and saw "connection refused" to localhost:9092,
so I changed the sonar.jdbc.url property in /conf/sonar.properties (first you must stop the Sonar server with Ctrl+C):
Before:
sonar.jdbc.url: jdbc:h2:tcp://localhost:9092/sonar
Modified:
sonar.jdbc.url: jdbc:h2:tcp://127.0.0.1:9092/sonar
If you have other problems, you can try deactivating Java 8 in Windows (Control Panel > Java > User and System tabs), closing all Control Panel windows, and then restarting Windows.
Additional steps, if necessary:
edit sonar-runner\conf\sonar-runner.properties
and modify the property:
sonar.host.url=http://127.0.0.1:9000
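One more hedged suggestion: sonar.web.host controls which interface the web server binds to, and binding it to a single address can make the UI unreachable through other interfaces; 0.0.0.0 listens on all of them. In conf/sonar.properties (the port shown is the default):
sonar.web.host=0.0.0.0
sonar.web.port=9000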
