Running two SonarQube instances on Windows Server 2008 R2

I already run one SonarQube instance on port 9000 and can access it at localhost:9000.
Now I would like to run a second SonarQube instance for my new project on port 10000. I've changed the following in the sonar.properties file:
sonar.web.port: 10000
sonar.web.context: /
However, when I run C:\SonarMAP\bin\windows-x86-64\StartSonar.bat, I get this ERROR message:
wrapper | ERROR: Another instance of the SonarQube application is already running.
Press any key to continue . . .
I've done some research on this problem but can't find any helpful information.
Any suggestions? Thanks!
UPDATE
The instance 1 configuration:
sonar.jdbc.username=username
sonar.jdbc.password=password
sonar.jdbc.url=jdbc:postgresql://server15/sonarQube
sonar.jdbc.driverClassName: org.postgresql.Driver
sonar.jdbc.validationQuery: select 1
sonar.jdbc.maxActive=20
sonar.jdbc.maxIdle=5
sonar.jdbc.minIdle=2
sonar.jdbc.maxWait=5000
sonar.jdbc.minEvictableIdleTimeMillis=600000
sonar.jdbc.timeBetweenEvictionRunsMillis=30000
The instance 2 configuration:
sonar.jdbc.username=username
sonar.jdbc.password=password
sonar.jdbc.url: jdbc:postgresql://localhost/sonarMAP
sonar.jdbc.driverClassName: org.postgresql.Driver
sonar.jdbc.validationQuery: select 1
sonar.jdbc.maxActive: 20
sonar.jdbc.maxIdle: 5
sonar.jdbc.minIdle: 2
sonar.jdbc.maxWait: 5000
sonar.jdbc.minEvictableIdleTimeMillis: 600000
sonar.jdbc.timeBetweenEvictionRunsMillis: 30000
sonar.web.port: 9100
sonar.web.context: /
sonar.search.port=9101
sonar.notifications.delay: 60

Apparently you can't run multiple instances on Windows because of wrapper.single_invocation=true in conf/wrapper.conf.
Setting it to false seems to allow this (you'll still have to use different ports, as Fabrice explained in his answer), but this gets into grey-zone territory: a non-recommended, untested setup.
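For reference, this is the one line to flip in the second instance's conf/wrapper.conf (a sketch of an unsupported setup, so treat it accordingly):
wrapper.single_invocation=false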

You need to change other settings inside the conf/sonar.properties file, namely:
sonar.search.port: the port used by Elasticsearch
sonar.search.httpPort: if you enabled it in the first instance, you've got to change it as well
and obviously you can't connect to the same schema of the same DB.
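For example, a second instance could use values like these in conf/sonar.properties (the ports here are arbitrary examples; any free ports work, and the JDBC URL must point at a different database or schema):
sonar.web.port=9100
sonar.search.port=9101
sonar.search.httpPort=9102
sonar.jdbc.url=jdbc:postgresql://localhost/sonarMAP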

Related

RedisCommandTimeoutException when connecting a Micronaut lambda to ElastiCache

I am trying to create a lambda using Micronaut 2 that connects to ElastiCache.
I have used the redis-lettuce dependency in the project with the following configuration, and in-transit encryption is enabled in the ElastiCache config:
redis:
  uri: redis://{aws master node endpoint}
  password: {password}
  tls: true
  ssl: true
  io-thread-pool-size: 5
  computation-thread-pool-size: 4
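As an aside, and not necessarily the fix here: with Lettuce, TLS is normally expressed in the URI scheme itself via rediss://, so a TLS-enabled configuration might look like this (a sketch, reusing the placeholders above):
redis:
  # rediss:// (double s) is the Lettuce URI scheme for TLS connections
  uri: rediss://{aws master node endpoint}
  password: {password}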
I am getting the exception below:
Command timed out after 1 minute(s): io.lettuce.core.RedisCommandTimeoutException
io.lettuce.core.RedisCommandTimeoutException: Command timed out after 1 minute(s)
    at io.lettuce.core.ExceptionFactory.createTimeoutException(ExceptionFactory.java:51)
    at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:119)
    at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:75)
    at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:79)
    at com.sun.proxy.$Proxy22.set(Unknown Source)
    at hello.world.function.HttpBookRedisHandler.execute(HttpBookRedisHandler.java:29)
    at hello.world.function.HttpBookRedisHandler.execute(HttpBookRedisHandler.java:16)
    at io.micronaut.function.aws.MicronautRequestHandler.handleRequest(MicronautRequestHandler.java:73)
I have tried Spring Cloud Function on the same network (literally on the same lambda) with the same ElastiCache setup, and it works fine.
Any direction that could help me debug this issue would be appreciated.
This might be late.
The first thing to mention here is that ElastiCache can only be accessed from within its VPC. If you want to access it from the internet, you need a NAT gateway in front of it.
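A quick way to check reachability is a raw TLS handshake against the endpoint from a machine inside the VPC (a sketch; the endpoint placeholder and port 6379 are taken from the config above):
# run from an instance inside the same VPC; a successful handshake prints the certificate chain
openssl s_client -connect {aws master node endpoint}:6379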

Connecting to Redis hosted in Docker on Windows 10 Pro

I have just set up Redis using Docker on Windows and I am trying to write a simple .NET console app that connects to it, to familiarize myself with things.
I am running Docker on Windows 10 Pro using "docker run redis:windowsservercore"
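Worth noting: by default a container's ports aren't published to the host, so reaching Redis from a host app usually requires an explicit mapping; a sketch of what that run command might look like:
docker run -p 6379:6379 redis:windowsservercore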
The Redis instance looks like it boots up fine and displays:
[1904] 21 Mar 13:18:50.835 # Server started, Redis version 3.2.100
[1904] 21 Mar 13:18:50.847 - The server is now ready to accept connections on port 6379
But in my app, when I try to connect to localhost:6379, it seems it can't find the Redis instance at all:
var log = new StringWriter();
try
{
    StackExchange.Redis.ConnectionMultiplexer RedisConnection = StackExchange.Redis.ConnectionMultiplexer.Connect("localhost:6379", log); // This line errors
    var cache = RedisConnection.GetDatabase(1);
    cache.StringSet("Test", "Hello World");
    Console.WriteLine(cache.StringGet("Test"));
}
catch (Exception ex)
{
    // Don't swallow the failure: the multiplexer's log explains why the connect failed
    Console.WriteLine(ex.Message);
    Console.WriteLine(log.ToString());
}
The connection above always fails with:
"It was not possible to connect to the redis server(s). UnableToConnect on localhost:6379/Interactive, Initializing/NotStarted, last: NONE, origin: BeginConnectAsync, outstanding: 0, last-read: 2s ago, last-write: 2s ago, keep-alive: 60s, state: Connecting, mgr: 10 of 10 available, last-heartbeat: never, global: 9s ago, v: 2.1.0.1"
I just can't figure out what I am doing wrong here. I have tried connecting using other names for localhost, and I disabled all firewalls for public and private networks to make sure that wasn't the issue.
I am not sure what I need to do here to get a connection established; hopefully someone has some advice for me.
Thanks everyone in advance
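One thing worth checking with Windows containers is whether localhost routes to the container at all; older Windows 10 builds didn't support loopback to container ports. A sketch for finding and using the container's own IP instead (the container ID placeholder is whatever docker ps shows, and "nat" is Docker's default Windows network):
# print the container's IP on the default "nat" network
docker inspect -f "{{.NetworkSettings.Networks.nat.IPAddress}}" <container-id>
Then try connecting to that IP (e.g. 172.x.x.x:6379) instead of localhost:6379.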

Hive Browser Throwing Error

I am trying to run some basic queries in the Hive editor in the Hue browser, but it returns the following error, whereas my Hive CLI works fine and can execute queries. Could someone help me?
Fetching results ran into the following error(s):
Bad status for request TFetchResultsReq(fetchType=1,
operationHandle=TOperationHandle(hasResultSet=True,
modifiedRowCount=None, operationType=0,
operationId=THandleIdentifier(secret='r\t\x80\xac\x1a\xa0K\xf8\xa4\xa0\x85?\x03!\x88\xa9',
guid='\x852\x0c\x87b\x7fJ\xe2\x9f\xee\x00\xc9\xeeo\x06\xbc')),
orientation=4, maxRows=-1):
TFetchResultsResp(status=TStatus(errorCode=0, errorMessage="Couldn't
find log associated with operation handle: OperationHandle
[opType=EXECUTE_STATEMENT,
getHandleIdentifier()=85320c87-627f-4ae2-9fee-00c9ee6f06bc]",
sqlState=None,
infoMessages=["*org.apache.hive.service.cli.HiveSQLException:Couldn't
find log associated with operation handle: OperationHandle
[opType=EXECUTE_STATEMENT,
getHandleIdentifier()=85320c87-627f-4ae2-9fee-00c9ee6f06bc]:24:23",
'org.apache.hive.service.cli.operation.OperationManager:getOperationLogRowSet:OperationManager.java:229',
'org.apache.hive.service.cli.session.HiveSessionImpl:fetchResults:HiveSessionImpl.java:687',
'sun.reflect.GeneratedMethodAccessor14:invoke::-1',
'sun.reflect.DelegatingMethodAccessorImpl:invoke:DelegatingMethodAccessorImpl.java:43',
'java.lang.reflect.Method:invoke:Method.java:606',
'org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:78',
'org.apache.hive.service.cli.session.HiveSessionProxy:access$000:HiveSessionProxy.java:36',
'org.apache.hive.service.cli.session.HiveSessionProxy$1:run:HiveSessionProxy.java:63',
'java.security.AccessController:doPrivileged:AccessController.java:-2',
'javax.security.auth.Subject:doAs:Subject.java:415',
'org.apache.hadoop.security.UserGroupInformation:doAs:UserGroupInformation.java:1657',
'org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:59',
'com.sun.proxy.$Proxy19:fetchResults::-1',
'org.apache.hive.service.cli.CLIService:fetchResults:CLIService.java:454',
'org.apache.hive.service.cli.thrift.ThriftCLIService:FetchResults:ThriftCLIService.java:672',
'org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1553',
'org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1538',
'org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39',
'org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39',
'org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56',
'org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:285',
'java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1145',
'java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:615',
'java.lang.Thread:run:Thread.java:745'], statusCode=3), results=None,
hasMoreRows=None)
This error is caused either by HiveServer2 not running or by Hue not having access to hive_conf_dir.
Check whether HiveServer2 has been started and is running. It uses port 10000 by default:
netstat -ntpl | grep 10000
If it is not running, start HiveServer2:
$HIVE_HOME/bin/hiveserver2
Also check the Hue configuration file, hue.ini. The hive_conf_dir property must be set under the [beeswax] section. If it is not set, add this property under [beeswax]:
hive_conf_dir=$HIVE_HOME/conf
Restart the Hue supervisor after making these changes.
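For reference, the [beeswax] section would look something like this (the host/port values are the defaults and the conf path is an example; adjust them for your install):
[beeswax]
  # where Hue finds HiveServer2 and the Hive client configuration
  hive_server_host=localhost
  hive_server_port=10000
  hive_conf_dir=/etc/hive/conf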

What is the right configuration for Titan DB 1.0 running against ES deployed on Google/AWS cloud

I'm using Titan 1.0 with ES 1.5.1 running internally as a service (127.0.0.1), and it is working pretty well.
My working ES configuration is:
storage.backend=cassandra
storage.hostname=cassandraserver2-cassandra-00
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.25
query.fast-property=true
index.search.backend=elasticsearch
index.search.hostname=localhost
index.search.elasticsearch.interface=NODE
Now I want to redeploy ES into the cloud, but unfortunately Titan won't come up.
The exception I get is:
gremlin> tg = TitanFactory.open('../conf/titan-db.properties')
Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex
Display stack trace? [yN] y
java.lang.IllegalArgumentException: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex
at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:55)
at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:473)
at com.thinkaurelius.titan.diskstorage.Backend.getIndexes(Backend.java:460)
at com.t...
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:279)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:198)
at org.elasticsearch.client.transport.support.InternalTransportClusterAdminClient.execute(InternalTransportClusterAdminClient.java:86)
What is the right configuration of the Titan properties to run against an Elasticsearch service on Google/AWS cloud?
Suppose the external IP of the VM is 8.35.193.69 and I can reach this machine with ping.
I'm using these titan-db properties:
storage.backend=cassandra
storage.hostname=cassandraserver2-cassandra-00
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.25
query.fast-property=true
index.search.backend=elasticsearch
index.search.hostname=8.35.193.69
index.search.client-only=true
index.search.local-mode=false
index.search.elasticsearch.interface=NODE
Any solutions are welcome.
You need to make sure port 9300 is open on your instance. If it's not open, you need to:
Ensure the ES service is up: sudo service elasticsearch status
Ensure port 9300 is open and accepting requests (a quick check is sketched below)
If the port is closed, enable TCP transport communication in the ES configuration
Change your configuration to look like this:
# elasticsearch config
index.search.backend=elasticsearch
index.search.elasticsearch.interface=TRANSPORT_CLIENT
index.search.hostname=your_ip:9300
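A quick reachability check from the Titan host (a sketch; 9300 is ES's transport port and 9200 its HTTP port):
# transport port, used by the TRANSPORT_CLIENT interface
nc -zv 8.35.193.69 9300
# HTTP port, to confirm ES itself is responding
curl http://8.35.193.69:9200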

Unresponsive socket after some time (Puma / Ruby)

I'm experiencing an unresponsive socket with my Puma setup after a random amount of time. Up to this point I don't have a clue what's causing the issue; I was hoping somebody here could help me with some answers or point me in the right direction. I have the following setup:
I'm using the official Docker ruby-2.2.3-slim image together with the latest Puma release, 2.15.3, and I've also installed Nginx as a reverse proxy. But I'm already sure Nginx isn't the problem here, because I tried to verify that the socket was working using a test script, and the socket wasn't working there either (I got a timeout as well), so I could rule out Nginx.
This is a testing environment, so the server isn't experiencing any extreme load. I've also checked memory consumption; it still has several GBs free, so that couldn't be the issue either.
What triggered me to look at the Puma socket was this error message in my Nginx error log:
upstream timed out (110: Connection timed out) while reading response header from upstream
Also, I couldn't find anything in Puma's logs indicating what is going wrong. Here is my Puma setup:
threads 0, 16
app_dir = ENV.fetch('APP_HOME')
environment ENV['RAILS_ENV']
daemonize
bind "unix://#{app_dir}/sockets/puma.sock"
stdout_redirect "#{app_dir}/log/puma.stdout.log", "#{app_dir}/log/puma.stderr.log", true
pidfile "#{app_dir}/pids/puma.pid"
state_path "#{app_dir}/pids/puma.state"
activate_control_app
on_worker_boot do
  require 'active_record'
  ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
  ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[ENV['RAILS_ENV']])
end
And this is the output in my Puma state file:
---
pid: 43
config: !ruby/object:Puma::Configuration
  cli_options:
  conf:
  options:
    :min_threads: 0
    :max_threads: 16
    :quiet: false
    :debug: false
    :binds:
    - unix:///APP/sockets/puma.sock
    :workers: 1
    :daemon: true
    :mode: :http
    :before_fork: []
    :worker_timeout: 60
    :worker_boot_timeout: 60
    :worker_shutdown_timeout: 30
    :environment: staging
    :redirect_stdout: "/APP/log/puma.stdout.log"
    :redirect_stderr: "/APP/log/puma.stderr.log"
    :redirect_append: true
    :pidfile: "/APP/pids/puma.pid"
    :state: "/APP/pids/puma.state"
    :control_url: unix:///tmp/puma-status-1449260516541-37
    :config_file: config/puma.rb
    :control_url_temp: "/tmp/puma-status-1449260516541-37"
    :control_auth_token: cda8879717be7a645ea323d931b88d4b
    :tag: APP
The application itself is a Rails app on the latest version, 4.2.5, deployed on GCE (Google Container Engine).
If somebody could give me some pointers on how to debug this further, it would be very much appreciated, because right now I don't see any output anywhere that could help me.
EDIT
I replaced the Unix socket with a TCP connection to Puma, with the same result: it still hangs after some time.
I'd start with:
How many requests get processed successfully per instance of Puma?
Make sure you log the beginning and end of each request with the thread ID of the thread executing it; what do you see? (A logging sketch follows this answer.)
Not knowing more about your application, I'd say it's likely the threads get stuck doing some long/blocking calls without timeouts, or spinning on some computation, until the whole thread pool gets depleted.
We'll see.
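A minimal Rack middleware sketch for the per-request thread logging suggested above (the class name is made up; insert it at the top of the Rails middleware stack):
# hypothetical RequestThreadLogger, placed in an autoloaded path
class RequestThreadLogger
  def initialize(app)
    @app = app
  end

  def call(env)
    tid = Thread.current.object_id
    Rails.logger.info "[thread #{tid}] start #{env['REQUEST_METHOD']} #{env['PATH_INFO']}"
    response = @app.call(env)
    # a request whose thread hangs will log "start" but never "done"
    Rails.logger.info "[thread #{tid}] done  #{env['PATH_INFO']}"
    response
  end
end

# in config/application.rb:
# config.middleware.insert_before 0, RequestThreadLogger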
I finally found out why my application was behaving the way it was.
After trying a TCP connection and switching to Unicorn, I started looking into other possible sources.
That's when I thought maybe my connection to Google Cloud SQL could be the problem. Reading the Cloud SQL FAQ, they mention that you have to tweak your Compute Engine instances to ensure they keep your DB connections open. So I performed the steps they recommend, and that solved the problem for me. I've added them here just in case:
# Display the current tcp_keepalive_time value.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time
# Set tcp_keepalive_time to 60 seconds and make it permanent across reboots.
$ echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
# Apply the change.
$ sudo /sbin/sysctl --load=/etc/sysctl.conf
# Display the tcp_keepalive_time value to verify the change was applied.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time
