Having trouble starting elasticsearch-2.0 - elasticsearch

I have recently run into an issue where I am not able to start elasticsearch.
Version: 2.0
OS: Linux
The following error message was displayed:
[ERROR][gateway ] [Node] failed to read local state, exiting...
ElasticsearchException[must specify numberOfShards for index [version]]; nested: IllegalArgumentException[must specify numberOfShards for index [version]];
at org.elasticsearch.ExceptionsHelper.maybeThrowRuntimeAndSuppress(ExceptionsHelper.java:163)
at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:309)
at org.elasticsearch.gateway.MetaStateService.loadIndexState(MetaStateService.java:112)
at org.elasticsearch.gateway.MetaStateService.loadFullState(MetaStateService.java:97)
at org.elasticsearch.gateway.GatewayMetaState.loadMetaState(GatewayMetaState.java:97)
at org.elasticsearch.gateway.GatewayMetaState.pre20Upgrade(GatewayMetaState.java:223)
at org.elasticsearch.gateway.GatewayMetaState.<init>(GatewayMetaState.java:85)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:56)
How can I get Elasticsearch to start?
Edit 01:
I have set the number of shards and replicas in the elasticsearch.yml file, but I am still not able to start Elasticsearch.
I am still getting the above error message.
Edit 02:
I have added the shard settings to the yml file:
#################################### Index ####################################
# You can set a number of options (such as shard/replica options, mapping
# or analyzer definitions, translog settings, ...) for indices globally,
# in this file.
#
# Note, that it makes more sense to configure index settings specifically for
# a certain index, either when creating it or by using the index templates API.
#
# See <http://elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules.html> and
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/indices-create-index.html>
# for more information.
# Set the number of shards (splits) of an index (5 by default):
#
index.number_of_shards: 2
# Set the number of replicas (additional copies) of an index (1 by default):
#
index.number_of_replicas: 1

We went ahead and restored the server to a snapshot from a prior date when Elasticsearch was working, and the issue was resolved.
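For anyone who hits the same failure without a backup to restore: the stack trace shows the error being thrown from GatewayMetaState while reading index metadata persisted on disk, which is why changing elasticsearch.yml alone cannot fix it. A diagnostic sketch, assuming the default Linux package data path of /var/lib/elasticsearch (adjust to match your path.data, and the <cluster> placeholder to your cluster name):
# Locate the on-disk state files for the index named "version" that the gateway fails to read
find /var/lib/elasticsearch -path '*indices/version/_state*' -ls
# Before removing anything, back up the offending index directory, e.g.:
# cp -r /var/lib/elasticsearch/<cluster>/nodes/0/indices/version /tmp/version-index-backup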

Related

Failed to communicate with peer when loading data using round robin load balancer in Apache NiFi cluster

I am facing a "failed to communicate with peer" error when loading data through a queue with a round robin load balancer in an Apache NiFi cluster.
I set up 3 nodes in the cluster, and below is the nifi.properties from one of the nodes.
From the screenshot attached below: I have a GetFile processor that reads text files containing multiple lines in CSV format. When several files arrive at the processor, each one is put into the queue once it has been read. The queue uses the round robin load balancing strategy. As soon as data starts loading into the queue, the error occurs.
####################
# State Management #
####################
nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=true
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# H2 Settings
nifi.database.directory=./database_repository
nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.wal.implementation=org.apache.nifi.wali.SequentialAccessWriteAheadLog
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.checkpoint.interval=20 secs
nifi.flowfile.repository.always.sync=false
nifi.flowfile.repository.encryption.key.provider.implementation=
nifi.flowfile.repository.encryption.key.provider.location=
nifi.flowfile.repository.encryption.key.provider.password=
nifi.flowfile.repository.encryption.key.id=
nifi.flowfile.repository.encryption.key=
nifi.flowfile.repository.retain.orphaned.flowfiles=true
nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=20000
# Content Repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=1 MB
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=7 days
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=../nifi-content-viewer/
nifi.content.repository.encryption.key.provider.implementation=
nifi.content.repository.encryption.key.provider.location=
nifi.content.repository.encryption.key.provider.password=
nifi.content.repository.encryption.key.id=
nifi.content.repository.encryption.key=
# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
nifi.provenance.repository.encryption.key.provider.implementation=
nifi.provenance.repository.encryption.key.provider.location=
nifi.provenance.repository.encryption.key.provider.password=
nifi.provenance.repository.encryption.key.id=
nifi.provenance.repository.encryption.key=
# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=30 days
nifi.provenance.repository.max.storage.size=10 GB
nifi.provenance.repository.rollover.time=10 mins
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable. Some examples to consider are filename, uuid, mime.type
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536
nifi.provenance.repository.concurrent.merge.threads=2
# Volatile Provenance Respository Properties
nifi.provenance.repository.buffer.size=100000
# Component and Node Status History Repository
nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
# Volatile Status History Repository Properties
nifi.components.status.repository.buffer.size=1440
nifi.components.status.snapshot.frequency=1 min
# QuestDB Status History Repository Properties
nifi.status.repository.questdb.persist.node.days=14
nifi.status.repository.questdb.persist.component.days=3
nifi.status.repository.questdb.persist.location=./status_repository
# Site to Site properties
nifi.remote.input.host=node1
nifi.remote.input.secure=false
nifi.remote.input.socket.port=10001
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.remote.contents.cache.expiration=30 secs
# web properties #
#############################################
# For security, NiFi will present the UI on 127.0.0.1 and only be accessible through this loopback interface.
# Be aware that changing these properties may affect how your instance can be accessed without any restriction.
# We recommend configuring HTTPS instead. The administrators guide provides instructions on how to do this.
nifi.web.http.host=node1
nifi.web.http.port=8081
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=
nifi.web.https.port=
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
nifi.web.max.header.size=16 KB
nifi.web.proxy.context.path=
nifi.web.proxy.host=
nifi.web.max.content.size=
nifi.web.max.requests.per.second=30000
nifi.web.request.timeout=60 secs
nifi.web.request.ip.whitelist=
nifi.web.should.send.server.version=true
# Include or Exclude TLS Cipher Suites for HTTPS
nifi.web.https.ciphersuites.include=
nifi.web.https.ciphersuites.exclude=
# security properties #
nifi.sensitive.props.key=sf4eCVtTmnwRfMd5LarMMkKyTONuLvgE
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=NIFI_PBKDF2_AES_GCM_256
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=
nifi.security.autoreload.enabled=false
nifi.security.autoreload.interval=10 secs
nifi.security.keystore=./conf/keystore.p12
nifi.security.keystoreType=PKCS12
nifi.security.keystorePasswd=df9e762b67b2c74eb1ea147be8d7ecf0
nifi.security.keyPasswd=df9e762b67b2c74eb1ea147be8d7ecf0
nifi.security.truststore=./conf/truststore.p12
nifi.security.truststoreType=PKCS12
nifi.security.truststorePasswd=6d115b1494e9dd5112c2d2dc0608bc85
nifi.security.user.authorizer=single-user-authorizer
nifi.security.allow.anonymous.authentication=false
nifi.security.user.login.identity.provider=single-user-provider
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
# OpenId Connect SSO Properties #
nifi.security.user.oidc.discovery.url=
nifi.security.user.oidc.connect.timeout=5 secs
nifi.security.user.oidc.read.timeout=5 secs
nifi.security.user.oidc.client.id=
nifi.security.user.oidc.client.secret=
nifi.security.user.oidc.preferred.jwsalgorithm=
nifi.security.user.oidc.additional.scopes=
nifi.security.user.oidc.claim.identifying.user=
nifi.security.user.oidc.fallback.claims.identifying.user=
# Apache Knox SSO Properties #
nifi.security.user.knox.url=
nifi.security.user.knox.publicKey=
nifi.security.user.knox.cookieName=hadoop-jwt
nifi.security.user.knox.audiences=
# SAML Properties #
nifi.security.user.saml.idp.metadata.url=
nifi.security.user.saml.sp.entity.id=
nifi.security.user.saml.identity.attribute.name=
nifi.security.user.saml.group.attribute.name=
nifi.security.user.saml.metadata.signing.enabled=false
nifi.security.user.saml.request.signing.enabled=false
nifi.security.user.saml.want.assertions.signed=true
nifi.security.user.saml.signature.algorithm=http://www.w3.org/2001/04/xmldsig-more#rsa-sha256
nifi.security.user.saml.signature.digest.algorithm=http://www.w3.org/2001/04/xmlenc#sha256
nifi.security.user.saml.message.logging.enabled=false
nifi.security.user.saml.authentication.expiration=12 hours
nifi.security.user.saml.single.logout.enabled=false
nifi.security.user.saml.http.client.truststore.strategy=JDK
nifi.security.user.saml.http.client.connect.timeout=30 secs
nifi.security.user.saml.http.client.read.timeout=30 secs
# Identity Mapping Properties #
# These properties allow normalizing user identities such that identities coming from different identity providers
# (certificates, LDAP, Kerberos) can be treated the same internally in NiFi. The following example demonstrates normalizing
# DNs from certificates and principals from Kerberos into a common identity string:
#
# nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?), O=(.*?), L=(.*?), ST=(.*?), C=(.*?)$
# nifi.security.identity.mapping.value.dn=$1#$2
# nifi.security.identity.mapping.transform.dn=NONE
# nifi.security.identity.mapping.pattern.kerb=^(.*?)/instance#(.*?)$
# nifi.security.identity.mapping.value.kerb=$1#$2
# nifi.security.identity.mapping.transform.kerb=UPPER
# Group Mapping Properties #
# These properties allow normalizing group names coming from external sources like LDAP. The following example
# lowercases any group name.
#
# nifi.security.group.mapping.pattern.anygroup=^(.*)$
# nifi.security.group.mapping.value.anygroup=$1
# nifi.security.group.mapping.transform.anygroup=LOWER
# cluster common properties (all nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.heartbeat.missable.max=8
nifi.cluster.protocol.is.secure=false
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=node1
nifi.cluster.node.protocol.port=9991
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=60 sec
nifi.cluster.node.read.timeout=60 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=
# cluster load balancing properties #
nifi.cluster.load.balance.host=node1
nifi.cluster.load.balance.port=6342
nifi.cluster.load.balance.connections.per.node=2
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=node1:2181,node2:2182,node3:2183
nifi.zookeeper.connect.timeout=10 secs
nifi.zookeeper.session.timeout=10 secs
nifi.zookeeper.root.node=/nifi
nifi.zookeeper.client.secure=false
nifi.zookeeper.security.keystore=
nifi.zookeeper.security.keystoreType=
nifi.zookeeper.security.keystorePasswd=
nifi.zookeeper.security.truststore=
nifi.zookeeper.security.truststoreType=
nifi.zookeeper.security.truststorePasswd=
# Zookeeper properties for the authentication scheme used when creating acls on znodes used for cluster management
# Values supported for nifi.zookeeper.auth.type are "default", which will apply world/anyone rights on znodes
# and "sasl" which will give rights to the sasl/kerberos identity used to authenticate the nifi node
# The identity is determined using the value in nifi.kerberos.service.principal and the removeHostFromPrincipal
# and removeRealmFromPrincipal values (which should align with the kerberos.removeHostFromPrincipal and kerberos.removeRealmFromPrincipal
# values configured on the zookeeper server).
nifi.zookeeper.auth.type=
nifi.zookeeper.kerberos.removeHostFromPrincipal=
nifi.zookeeper.kerberos.removeRealmFromPrincipal=
# kerberos #
nifi.kerberos.krb5.file=
# kerberos service principal #
nifi.kerberos.service.principal=
nifi.kerberos.service.keytab.location=
# kerberos spnego principal #
nifi.kerberos.spnego.principal=
nifi.kerberos.spnego.keytab.location=
nifi.kerberos.spnego.authentication.expiration=12 hours
# external properties files for variable registry
# supports a comma delimited list of file locations
nifi.variable.registry.properties=
# analytics properties #
nifi.analytics.predict.enabled=false
nifi.analytics.predict.interval=3 mins
nifi.analytics.query.interval=5 mins
nifi.analytics.connection.model.implementation=org.apache.nifi.controller.status.analytics.models.OrdinaryLeastSquares
nifi.analytics.connection.model.score.name=rSquared
nifi.analytics.connection.model.score.threshold=.90
# runtime monitoring properties
nifi.monitor.long.running.task.schedule=
nifi.monitor.long.running.task.threshold=
Hopefully someone can give some advice.
Thanks in advance.
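A "failed to communicate with peer" during load balancing often comes down to the nodes not being able to reach each other on the load balance port. As a first check, a small sketch (assuming the node names node1, node2, node3 and the nifi.cluster.load.balance.port=6342 from the config above), run from each node:
# Verify that each peer's load balance port (6342 above) is reachable from this node
for host in node1 node2 node3; do
  nc -zv "$host" 6342
done
Also confirm that nifi.cluster.load.balance.host is set to each node's own hostname in that node's nifi.properties, rather than the same value on all three nodes.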

Kibana Max Attempts Reached while exporting CSV file

In the "Discover" section, as we would like to export the result in a csv file,
The result is around 1 Million documents, when we try to export in csv, it never works...
produces error: Max attempts reached (3), But when tried to export small document around 50K it works fine and created the report successfully.
This is what my configuration looks in kibana.yml
## x-pack module setting
xpack.reporting.queue.timeout: 3600000
xpack.reporting.csv.maxSizeBytes: 1024000000
xpack.reporting.csv.scroll.size: 10000
server.maxPayloadBytes: 1073741824
and this is the configuration in elasticsearch.yml:
http.compression: true
http.max_content_length: 1024mb
I don't understand what is causing the issue.
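As a workaround while the report keeps failing, the documents can be pulled straight from Elasticsearch with the scroll API and assembled into a CSV outside Kibana; a minimal sketch, assuming a hypothetical index name my-index and the default host/port:
# First page: open a scroll context and fetch up to 10000 hits
curl -s -XPOST 'localhost:9200/my-index/_search?scroll=1m' -H 'Content-Type: application/json' -d '{"size": 10000, "query": {"match_all": {}}}'
# Subsequent pages: keep paging with the _scroll_id returned by the previous call
curl -s -XPOST 'localhost:9200/_search/scroll' -H 'Content-Type: application/json' -d '{"scroll": "1m", "scroll_id": "<_scroll_id from the previous response>"}'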

Setup and configuration of JanusGraph for a Spark cluster and Cassandra

I am running JanusGraph (0.1.0) with Spark (1.6.1) on a single machine.
I did my configuration as described here.
When accessing the graph in the gremlin-console with the SparkGraphComputer, it is always empty. I cannot find any errors in the logfiles; it is just an empty graph.
Is anyone using JanusGraph with Spark who can share their configuration and properties?
Using a JanusGraph directly, I get the expected output:
gremlin> graph=JanusGraphFactory.open('conf/test.properties')
==>standardjanusgraph[cassandrathrift:[127.0.0.1]]
gremlin> g=graph.traversal()
==>graphtraversalsource[standardjanusgraph[cassandrathrift:[127.0.0.1]], standard]
gremlin> g.V().count()
14:26:10 WARN org.janusgraph.graphdb.transaction.StandardJanusGraphTx - Query requires iterating over all vertices [()]. For better performance, use indexes
==>1000001
gremlin>
Using a HadoopGraph with Spark as GraphComputer, the graph is empty:
gremlin> graph=GraphFactory.open('conf/test.properties')
==>hadoopgraph[cassandrainputformat->gryooutputformat]
gremlin> g=graph.traversal().withComputer(SparkGraphComputer)
==>graphtraversalsource[hadoopgraph[cassandrainputformat->gryooutputformat], sparkgraphcomputer]
gremlin> g.V().count()
==>0
My conf/test.properties:
#
# Hadoop Graph Configuration
#
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphInputFormat=org.janusgraph.hadoop.formats.cassandra.CassandraInputFormat
gremlin.hadoop.graphOutputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat
gremlin.hadoop.memoryOutputFormat=org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
gremlin.hadoop.memoryOutputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat
gremlin.hadoop.deriveMemory=false
gremlin.hadoop.jarsInDistributedCache=true
gremlin.hadoop.inputLocation=none
gremlin.hadoop.outputLocation=output
#
# Titan Cassandra InputFormat configuration
#
janusgraphmr.ioformat.conf.storage.backend=cassandrathrift
janusgraphmr.ioformat.conf.storage.hostname=127.0.0.1
janusgraphmr.ioformat.conf.storage.keyspace=janusgraph
storage.backend=cassandrathrift
storage.hostname=127.0.0.1
storage.keyspace=janusgraph
#
# Apache Cassandra InputFormat configuration
#
cassandra.input.partitioner.class=org.apache.cassandra.dht.Murmur3Partitioner
cassandra.input.keyspace=janusgraph
cassandra.input.predicate=0c00020b0001000000000b000200000000020003000800047fffffff0000
cassandra.input.columnfamily=edgestore
cassandra.range.batch.size=2147483647
#
# SparkGraphComputer Configuration
#
spark.master=spark://127.0.0.1:7077
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.executor.memory=100g
gremlin.spark.persistContext=true
gremlin.hadoop.defaultGraphComputer=org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer
HDFS seems to be configured correctly as described here:
gremlin> hdfs
==>storage[DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_178390072_1, ugi=cassandra (auth:SIMPLE)]]]
Try fixing these properties:
janusgraphmr.ioformat.conf.storage.keyspace=janusgraph
storage.keyspace=janusgraph
Replace with:
janusgraphmr.ioformat.conf.storage.cassandra.keyspace=janusgraph
storage.cassandra.keyspace=janusgraph
The default keyspace name is janusgraph, so despite the mistakes in the property names, I don't think you would have observed this problem unless you loaded your data under a different keyspace name.
The latter property is described in the Configuration Reference. Also, keep an eye on this open issue to improve the docs for Hadoop-Graph usage.
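If in doubt about which keyspace your data was actually loaded into, a quick check against Cassandra (assuming cqlsh is available and Cassandra is listening on the default port):
cqlsh 127.0.0.1 -e "DESCRIBE KEYSPACES;"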

Error during SonarQube Scanner execution with SCM provider autodetection failed

I am using the sonar-swift plugin and have set up SonarQube on OS X. I am using SonarQube 6.1, sonar-runner 2.8, and the backelite-sonar-swift-plugin-0.2.4.jar file. After running ./run-sonar-swift.sh it gives the following error. I am using a MySQL DB. Sonar connects to the DB and creates all the tables, but it ends with an error.
Done linting! Found 34 violations, 2 serious in 4 files.
Running Lizard.....Running SonarQube using SonarQube Runner..WARN: Property 'sonar.jdbc.url' is not supported any more. It will be ignored. There is no longer any DB connection to the SQ database.
WARN: Property 'sonar.jdbc.username' is not supported any more. It will be ignored. There is no longer any DB connection to the SQ database.
WARN: Property 'sonar.jdbc.password' is not supported any more. It will be ignored. There is no longer any DB connection to the SQ database.
....WARN: SCM provider autodetection failed. No SCM provider claims to support this project. Please use sonar.scm.provider to define SCM of your project.
.ERROR: Error during SonarQube Scanner execution
ERROR: File [moduleKey=abc.Krish, relative=KrishTests/KrishTests.swift, basedir=/Users/vjayah1/Desktop/Krish] can't be indexed twice. Please check that inclusion/exclusion patterns produce disjoint sets for main and test files
ERROR:
ERROR: Re-run SonarQube Scanner using the -X switch to enable full debug logging.
ERROR - Command 'sonar-runner ' failed with error code: 1
sonar-runner.properties (/usr/local/Cellar/sonar-runner/2.8/libexec/conf)
#Configure here general information about the environment, such as SonarQube DB details for example
#No information about specific project should appear here
#----- Default SonarQube server
sonar.host.url=http://localhost:9000
#----- Default source code encoding
#sonar.sourceEncoding=UTF-8
#----- Global database settings (not used for SonarQube 5.2+)
sonar.jdbc.username=root
sonar.jdbc.password=1234
#----- PostgreSQL
#sonar.jdbc.url=jdbc:postgresql://localhost/sonar
#----- MySQL
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar_firstdb?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
#----- Oracle
#sonar.jdbc.url=jdbc:oracle:thin:#localhost/XE
#----- Microsoft SQLServer
#sonar.jdbc.url=jdbc:jtds:sqlserver://localhost/sonar;SelectMethod=Cursor
sonar.properties file (/usr/local/Cellar/sonarqube/6.1/libexec/conf)
# Property values can:
# - reference an environment variable, for example sonar.jdbc.url= ${env:SONAR_JDBC_URL}
# - be encrypted. See http://redirect.sonarsource.com/doc/settings-encryption.html
#--------------------------------------------------------------------------------------------------
# DATABASE
#
# IMPORTANT: the embedded H2 database is used by default. It is recommended for tests but not for
# production use. Supported databases are MySQL, Oracle, PostgreSQL and Microsoft SQLServer.
# User credentials.
# Permissions to create tables, indices and triggers must be granted to JDBC user.
# The schema must be created first.
sonar.jdbc.username=root
sonar.jdbc.password=1234
#----- Embedded Database (default)
# H2 embedded database server listening port, defaults to 9092
#sonar.embeddedDatabase.port=9092
#----- MySQL 5.6 or greater
# Only InnoDB storage engine is supported (not myISAM).
# Only the bundled driver is supported. It can not be changed.
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar_firstdb?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
#----- Oracle 11g/12c
# - Only thin client is supported
# - Only versions 11.2.x and 12.x of Oracle JDBC driver are supported
# - The JDBC driver must be copied into the directory extensions/jdbc-driver/oracle/
# - If you need to set the schema, please refer to http://jira.sonarsource.com/browse/SONAR-5000
#sonar.jdbc.url=jdbc:oracle:thin:#localhost:1521/XE
#----- PostgreSQL 8.x/9.x
# If you don't use the schema named "public", please refer to http://jira.sonarsource.com/browse/SONAR-5000
#sonar.jdbc.url=jdbc:postgresql://localhost/sonar
#----- Microsoft SQLServer 2012/2014 and SQL Azure
# A database named sonar must exist and its collation must be case-sensitive (CS) and accent-sensitive (AS)
# Use the following connection string if you want to use integrated security with Microsoft Sql Server
# Do not set sonar.jdbc.username or sonar.jdbc.password property if you are using Integrated Security
# For Integrated Security to work, you have to download the Microsoft SQL JDBC driver package from
# http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=11774
# and copy sqljdbc_auth.dll to your path. You have to copy the 32 bit or 64 bit version of the dll
# depending upon the architecture of your server machine.
# This version of SonarQube has been tested with Microsoft SQL JDBC version 4.1
#sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar;integratedSecurity=true
# Use the following connection string if you want to use SQL Auth while connecting to MS Sql Server.
# Set the sonar.jdbc.username and sonar.jdbc.password appropriately.
#sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar
#----- Connection pool settings
# The maximum number of active connections that can be allocated
# at the same time, or negative for no limit.
# The recommended value is 1.2 * max sizes of HTTP pools. For example if HTTP ports are
# enabled with default sizes (50, see property sonar.web.http.maxThreads)
# then sonar.jdbc.maxActive should be 1.2 * (50) = 120.
#sonar.jdbc.maxActive=60
# The maximum number of connections that can remain idle in the
# pool, without extra ones being released, or negative for no limit.
#sonar.jdbc.maxIdle=5
# The minimum number of connections that can remain idle in the pool,
# without extra ones being created, or zero to create none.
#sonar.jdbc.minIdle=2
# The maximum number of milliseconds that the pool will wait (when there
# are no available connections) for a connection to be returned before
# throwing an exception, or <= 0 to wait indefinitely.
#sonar.jdbc.maxWait=5000
#sonar.jdbc.minEvictableIdleTimeMillis=600000
#sonar.jdbc.timeBetweenEvictionRunsMillis=30000
#--------------------------------------------------------------------------------------------------
# WEB SERVER
# Web server is executed in a dedicated Java process. By default heap size is 512Mb.
# Use the following property to customize JVM options.
# Recommendations:
#
# The HotSpot Server VM is recommended. The property -server should be added if server mode
# is not enabled by default on your environment:
# http://docs.oracle.com/javase/8/docs/technotes/guides/vm/server-class.html
#
# Startup can be long if entropy source is short of entropy. Adding
# -Djava.security.egd=file:/dev/./urandom is an option to resolve the problem.
# See https://wiki.apache.org/tomcat/HowTo/FasterStartUp#Entropy_Source
#
#sonar.web.javaOpts=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.web.javaAdditionalOpts=
# Binding IP address. For servers with more than one IP address, this property specifies which
# address will be used for listening on the specified ports.
# By default, ports will be used on all IP addresses associated with the server.
#sonar.web.host=0.0.0.0
# Web context. When set, it must start with forward slash (for example /sonarqube).
# The default value is root context (empty value).
#sonar.web.context=
# TCP port for incoming HTTP connections. Default value is 9000.
sonar.web.port=9000
# The maximum number of connections that the server will accept and process at any given time.
# When this number has been reached, the server will not accept any more connections until
# the number of connections falls below this value. The operating system may still accept connections
# based on the sonar.web.connections.acceptCount property. The default value is 50.
#sonar.web.http.maxThreads=50
# The minimum number of threads always kept running. The default value is 5.
#sonar.web.http.minThreads=5
# The maximum queue length for incoming connection requests when all possible request processing
# threads are in use. Any requests received when the queue is full will be refused.
# The default value is 25.
#sonar.web.http.acceptCount=25
# By default users are logged out and sessions closed when server is restarted.
# If you prefer keeping user sessions open, a secret should be defined. Value is
# HS256 key encoded with base64. It must be unique for each installation of SonarQube.
# Example of command-line:
# echo -n "type_what_you_want" | openssl dgst -sha256 -hmac "key" -binary | base64
#sonar.auth.jwtBase64Hs256Secret=
#--------------------------------------------------------------------------------------------------
# COMPUTE ENGINE
# The Compute Engine is responsible for processing background tasks.
# Compute Engine is executed in a dedicated Java process. Default heap size is 512Mb.
# Use the following property to customize JVM options.
# Recommendations:
#
# The HotSpot Server VM is recommended. The property -server should be added if server mode
# is not enabled by default on your environment:
# http://docs.oracle.com/javase/8/docs/technotes/guides/vm/server-class.html
#
#sonar.ce.javaOpts=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.ce.javaAdditionalOpts=
# The number of workers in the Compute Engine. Value must be greater than zero.
# By default the Compute Engine uses a single worker and therefore processes tasks one at a time.
# Recommendations:
#
# Using N workers will require N times as much Heap memory (see property
# sonar.ce.javaOpts to tune heap) and produce N times as much IOs on disk, database and
# Elasticsearch. The number of workers must suit your environment.
#sonar.ce.workerCount=1
#--------------------------------------------------------------------------------------------------
# ELASTICSEARCH
# Elasticsearch is used to facilitate fast and accurate information retrieval.
# It is executed in a dedicated Java process. Default heap size is 1Gb.
# JVM options of Elasticsearch process
# Recommendations:
#
# Use HotSpot Server VM. The property -server should be added if server mode
# is not enabled by default on your environment:
# http://docs.oracle.com/javase/8/docs/technotes/guides/vm/server-class.html
#
#sonar.search.javaOpts=-Xmx1G -Xms256m -Xss256k -Djna.nosys=true \
# -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 \
# -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.search.javaAdditionalOpts=
# Elasticsearch port. Default is 9001. Use 0 to get a free port.
# As a security precaution, should be blocked by a firewall and not exposed to the Internet.
#sonar.search.port=9001
# Elasticsearch host. The search server will bind this address and the search client will connect to it.
# Default is 127.0.0.1.
# As a security precaution, should NOT be set to a publicly available address.
#sonar.search.host=127.0.0.1
#--------------------------------------------------------------------------------------------------
# UPDATE CENTER
# Update Center requires an internet connection to request https://update.sonarsource.org
# It is enabled by default.
#sonar.updatecenter.activate=true
# HTTP proxy (default none)
#http.proxyHost=
#http.proxyPort=
# HTTPS proxy (defaults are values of http.proxyHost and http.proxyPort)
#https.proxyHost=
#https.proxyPort=
# NT domain name if NTLM proxy is used
#http.auth.ntlm.domain=
# SOCKS proxy (default none)
#socksProxyHost=
#socksProxyPort=
# Proxy authentication (used for HTTP, HTTPS and SOCKS proxies)
#http.proxyUser=
#http.proxyPassword=
#--------------------------------------------------------------------------------------------------
# LOGGING
# Level of logs. Supported values are INFO(default), DEBUG and TRACE (DEBUG + SQL + ES requests)
#sonar.log.level=INFO
# Path to log files. Can be absolute or relative to installation directory.
# Default is <installation home>/logs
#sonar.path.logs=logs
# Rolling policy of log files
# - based on time if value starts with "time:", for example by day ("time:yyyy-MM-dd")
# or by month ("time:yyyy-MM")
# - based on size if value starts with "size:", for example "size:10MB"
# - disabled if value is "none". That needs logs to be managed by an external system like logrotate.
#sonar.log.rollingPolicy=time:yyyy-MM-dd
# Maximum number of files to keep if a rolling policy is enabled.
# - maximum value is 20 on size rolling policy
# - unlimited on time rolling policy. Set to zero to disable old file purging.
#sonar.log.maxFiles=7
# Access log is the list of all the HTTP requests received by server. If enabled, it is stored
# in the file {sonar.path.logs}/access.log. This file follows the same rolling policy as for
# sonar.log (see sonar.log.rollingPolicy and sonar.log.maxFiles).
#sonar.web.accessLogs.enable=true
# Format of access log. It is ignored if sonar.web.accessLogs.enable=false. Possible values are:
# - "common" is the Common Log Format, shortcut to: %h %l %u %user %date "%r" %s %b
# - "combined" is another format widely recognized, shortcut to: %h %l %u [%t] "%r" %s %b "%i{Referer}" "%i{User-Agent}"
# - else a custom pattern. See http://logback.qos.ch/manual/layouts.html#AccessPatternLayout.
# The login of authenticated user is not implemented with "%u" but with "%reqAttribute{LOGIN}" (since version 6.1).
# The value displayed for anonymous users is "-".
# If SonarQube is behind a reverse proxy, then the following value allows to display the correct remote IP address:
#sonar.web.accessLogs.pattern=%i{X-Forwarded-For} %l %u [%t] "%r" %s %b "%i{Referer}" "%i{User-Agent}"
# Default value is:
#sonar.web.accessLogs.pattern=combined
#--------------------------------------------------------------------------------------------------
# OTHERS
# Delay in seconds between processing of notification queue. Default is 60 seconds.
#sonar.notifications.delay=60
# Paths to persistent data files (embedded database and search index) and temporary files.
# Can be absolute or relative to installation directory.
# Defaults are respectively <installation home>/data and <installation home>/temp
#sonar.path.data=data
#sonar.path.temp=temp
#--------------------------------------------------------------------------------------------------
# DEVELOPMENT - only for developers
# The following properties MUST NOT be used in production environments.
# Dev mode allows to reload web sources on changes and to restart server when new versions
# of plugins are deployed.
#sonar.web.dev=false
# Path to webapp sources for hot-reloading of Ruby on Rails, JS and CSS (only core,
# plugins not supported).
#sonar.web.dev.sources=/path/to/server/sonar-web/src/main/webapp
# Elasticsearch HTTP connector, for example for KOPF:
# http://lmenezes.com/elasticsearch-kopf/?location=http://localhost:9010
#sonar.search.httpPort=-1
sonar-project.properties (project root folder)
##########################
# Required configuration #
##########################
sonar.projectKey=abc.Krish
sonar.projectName=Krish
sonar.projectVersion=1.0
# Comment if you have a project with mixed ObjC / Swift
sonar.language=swift
# Project description
sonar.projectDescription=prjDescription
# Path to source directories
sonar.sources=/Users/vjayah1/Desktop/Krish
# Path to test directories (comment if no test)
sonar.tests=/Users/vjayah1/Desktop/Krish/KrishTests
# Destination Simulator to run tests
# As string expected in destination argument of xcodebuild command
# Example = sonar.swift.simulator=platform=iOS Simulator,name=iPhone 6,OS=9.2
sonar.swift.simulator=platform=iOS Simulator,name=iPhone 7,OS=10.1
# Xcode project configuration (.xcodeproj)
# and use the later to specify which project(s) to include in the analysis (comma separated list)
# Specify either xcodeproj or xcodeproj + xcworkspace
#sonar.swift.project=MyPrj.xcodeproj
#sonar.swift.workspace=MyWrkSpc.xcworkspace
sonar.swift.project=Krish.xcodeproj
# Scheme to build your application
sonar.swift.appScheme=Krish
# Configuration to use for your scheme. if you do not specify that the default will be Debug
sonar.swift.appConfiguration=Debug
##########################
# Optional configuration #
##########################
# Encoding of the source code
sonar.sourceEncoding=UTF-8
# SCM
# sonar.scm.enabled=true
# sonar.scm.url=scm:git:http://xxx
# JUnit report generated by run-sonar.sh is stored in sonar-reports/TEST-report.xml
# Change it only if you generate the file on your own
# The XML files have to be prefixed by TEST- otherwise they are not processed
# sonar.junit.reportsPath=sonar-reports/
# Lizard report generated by run-sonar.sh is stored in sonar-reports/lizard-report.xml
# Change it only if you generate the file on your own
# sonar.swift.lizard.report=sonar-reports/lizard-report.xml
# Cobertura report generated by run-sonar.sh is stored in sonar-reports/coverage.xml
# Change it only if you generate the file on your own
# sonar.swift.coverage.reportPattern=sonar-reports/coverage*.xml
# OCLint report generated by run-sonar.sh is stored in sonar-reports/oclint.xml
# Change it only if you generate the file on your own
# sonar.swift.swiftlint.report=sonar-reports/*swiftlint.txt
# Paths to exclude from coverage report (tests, 3rd party libraries etc.)
# sonar.swift.excludedPathsFromCoverage=pattern1,pattern2
sonar.swift.excludedPathsFromCoverage=.*Tests.*
This is a configuration issue:
# Path to source directories
sonar.sources=/Users/vjayah1/Desktop/Krish
# Path to test directories (comment if no test)
sonar.tests=/Users/vjayah1/Desktop/Krish/KrishTests
Source folders are indexed recursively, so /Users/vjayah1/Desktop/Krish will also contain /Users/vjayah1/Desktop/Krish/KrishTests.
You should exclude the test folder from the main sources:
sonar.exclusions=/Users/vjayah1/Desktop/Krish/KrishTests/**/*
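Putting it together, the relevant fragment of sonar-project.properties would then read:
# Path to source directories
sonar.sources=/Users/vjayah1/Desktop/Krish
# Path to test directories
sonar.tests=/Users/vjayah1/Desktop/Krish/KrishTests
# Keep the main and test file sets disjoint by excluding tests from the main sources
sonar.exclusions=/Users/vjayah1/Desktop/Krish/KrishTests/**/*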

logstash elasticsearch unassigned shards

I'm having trouble figuring out why this is happening and how to fix it.
Note that the first logstash index, which has all shards assigned, is only in that state because I manually assigned them.
Everything had been working as expected for several months until today, when I noticed that shards have been lying around unassigned on all my logstash indexes.
Using Elasticsearch v1.4.0
My Elasticsearch logs appear as follows up until July 29, 2015:
[2015-07-29 00:01:53,352][DEBUG][discovery.zen.publish ] [Thor] received cluster state version 4827
Also, there is a script running on the server which trims the number of logstash indexes down to 30. I think that is where this type of log line comes from:
[2015-07-29 17:43:12,800][DEBUG][gateway.local.state.meta ] [Thor] [logstash-2015.01.25] deleting index that is no longer part of the metadata (indices: [[kibana-int, logstash-2015.07.11, logstash-2015.07.04, logstash-2015.07.03, logstash-2015.07.12, logstash-2015.07.21, logstash-2015.07.17, users_1416418645233, logstash-2015.07.20, logstash-2015.07.25, logstash-2015.07.06, logstash-2015.07.28, logstash-2015.01.24, logstash-2015.07.18, logstash-2015.07.26, logstash-2015.07.08, logstash-2015.07.19, logstash-2015.07.09, logstash-2015.07.22, logstash-2015.07.07, logstash-2015.07.29, logstash-2015.07.10, logstash-2015.07.05, logstash-2015.07.01, logstash-2015.07.16, logstash-2015.07.24, logstash-2015.07.02, logstash-2015.07.27, logstash-2015.07.14, logstash-2015.07.13, logstash-2015.07.23, logstash-2015.06.30, logstash-2015.07.15]])
On July 29, there are a few new entries which I haven't seen before:
[2015-07-29 00:01:38,024][DEBUG][action.bulk ] [Thor] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2015-07-29 17:12:58,658][INFO ][cluster.routing.allocation.decider] [Thor] updating [cluster.routing.allocation.enable] from [ALL] to [NONE]
[2015-07-29 17:12:58,658][INFO ][cluster.routing.allocation.decider] [Thor] updating [cluster.routing.allocation.disable_allocation] from [false] to [true]
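Those last two lines show that shard allocation was switched off cluster-wide (updated from ALL to NONE), which would explain why newly created logstash indexes are left with unassigned shards. Assuming nothing is deliberately keeping allocation disabled, a sketch of turning it back on through the cluster settings API (default host/port):
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}'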
