How to specify a token-based Sonar server URL in the hygieia-codequality-sonar-collector application.properties - sonarqube

I have created a Hygieia dashboard and am trying to configure a Sonar project. I followed the hygieia-codequality-sonar-collector document below and am running the Sonar collector.
https://github.com/Hygieia/Hygieia/blob/gh-pages/pages/hygieia/collectors/build/sonar.md
Instead of sonar.usernames and sonar.passwords, I have provided sonar.tokens[0]= in application.properties, but I am getting the login error below.
ERROR c.c.d.collector.DefaultSonar6Client - org.springframework.web.client.HttpClientErrorException: 401
2022-08-16 04:19:00,304 [taskScheduler-1] ERROR c.c.d.collector.SonarCollectorTask - org.springframework.web.client.HttpClientErrorException: 401
2022-08-16 04:19:00,304 [taskScheduler-1] INFO c.c.d.collect
# Database Name
dbname=dashboarddb
# Database HostName - default is localhost
dbhost=localhost
# Database Port - default is 27017
dbport=27017
# MongoDB replicaset
dbreplicaset=[false if you are not using MongoDB replicaset]
dbhostport=[host1:port1,host2:port2,host3:port3]
# Database Username - default is blank
dbusername=dashboarduser
# Database Password - default is blank
dbpassword=dbpassword
# Collector schedule (required)
sonar.cron=0 0/5 * * * *
# Sonar server(s) (required) - Can provide multiple
sonar.servers[0]=https://abc.company.com
# Sonar version, match array index to the server. If not set, will default to version prior to 6.3.
sonar.versions[0]=8.6
# Sonar Metrics - Required.
#Sonar versions lesser than 6.3
# Sonar tokens to connect to authenticated url
sonar.tokens[0]=xxxxxxxxx
#sonar.metrics[0]=ncloc,line_coverage,violations,critical_violations,major_violations,blocker_violations,violations_density,sqale_index,test_success_density,test_failures,test_errors,tests
# For Sonar version 6.3 and above
sonar.metrics[0]=ncloc,violations,new_vulnerabilities,critical_violations,major_violations,blocker_violations,tests,test_success_density,test_errors,test_failures,coverage,line_coverage,sqale_index,alert_status,quality_gate_details
# Sonar login credentials
# Format: username1,username2,username3,etc.
#sonar.usernames=
# Format: password1,password2,password3,etc.
#sonar.passwords=
Please help me with this.

I fixed the token-based Sonar login issue after updating the sonar.servers[0] URL to include the API token.
Correct application.properties file:
# Database Name
dbname=dashboarddb
# Database HostName - default is localhost
dbhost=localhost
# Database Port - default is 27017
dbport=27017
# MongoDB replicaset
dbreplicaset=[false if you are not using MongoDB replicaset]
dbhostport=[host1:port1,host2:port2,host3:port3]
# Database Username - default is blank
dbusername=dashboarduser
# Database Password - default is blank
dbpassword=dbpassword
# Collector schedule (required)
sonar.cron=0 0/5 * * * *
# Sonar server(s) (required) - Can provide multiple
sonar.servers[0]=https://api-token@abc.company.com
# Sonar version, match array index to the server. If not set, will default to version prior to 6.3.
sonar.versions[0]=8.6
# Sonar Metrics - Required.
#Sonar versions lesser than 6.3
# Sonar tokens to connect to authenticated url
sonar.tokens[0]=xxxxxxxxx
#sonar.metrics[0]=ncloc,line_coverage,violations,critical_violations,major_violations,blocker_violations,violations_density,sqale_index,test_success_density,test_failures,test_errors,tests
# For Sonar version 6.3 and above
sonar.metrics[0]=ncloc,violations,new_vulnerabilities,critical_violations,major_violations,blocker_violations,tests,test_success_density,test_errors,test_failures,coverage,line_coverage,sqale_index,alert_status,quality_gate_details
# Sonar login credentials
# Format: username1,username2,username3,etc.
#sonar.usernames=
# Format: password1,password2,password3,etc.
#sonar.passwords=
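For reference, the token can be validated against SonarQube independently of Hygieia before restarting the collector. This is only a sketch, assuming the SonarQube web API is reachable at the server URL above and xxxxxxxxx is the real token (sent as the username with an empty password):
curl -u xxxxxxxxx: https://abc.company.com/api/authentication/validate
A response of {"valid":true} means the token itself is fine and any remaining 401 comes from the collector configuration.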
Thank you.

Related

Failed to communicate with peer when load data using round robin load balancer in Apache NiFi cluster

I am facing a "failed to communicate with peer" issue when loading data through a queue that uses the Round Robin load balancer in an Apache NiFi cluster.
I have set up 3 nodes in the cluster, and below is the nifi.properties configuration from one of the nodes.
As shown in the attached screenshot, I have a GetFile processor that reads text files containing multiple lines in CSV format. When there are several files, each one is put onto the queue as soon as it is read. On that queue I use the round robin load balancer, and the error occurs as soon as it starts loading data into the queue.
####################
# State Management #
####################
nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=true
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# H2 Settings
nifi.database.directory=./database_repository
nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.wal.implementation=org.apache.nifi.wali.SequentialAccessWriteAheadLog
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.checkpoint.interval=20 secs
nifi.flowfile.repository.always.sync=false
nifi.flowfile.repository.encryption.key.provider.implementation=
nifi.flowfile.repository.encryption.key.provider.location=
nifi.flowfile.repository.encryption.key.provider.password=
nifi.flowfile.repository.encryption.key.id=
nifi.flowfile.repository.encryption.key=
nifi.flowfile.repository.retain.orphaned.flowfiles=true
nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=20000
# Content Repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=1 MB
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=7 days
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=../nifi-content-viewer/
nifi.content.repository.encryption.key.provider.implementation=
nifi.content.repository.encryption.key.provider.location=
nifi.content.repository.encryption.key.provider.password=
nifi.content.repository.encryption.key.id=
nifi.content.repository.encryption.key=
# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
nifi.provenance.repository.encryption.key.provider.implementation=
nifi.provenance.repository.encryption.key.provider.location=
nifi.provenance.repository.encryption.key.provider.password=
nifi.provenance.repository.encryption.key.id=
nifi.provenance.repository.encryption.key=
# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=30 days
nifi.provenance.repository.max.storage.size=10 GB
nifi.provenance.repository.rollover.time=10 mins
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable. Some examples to consider are filename, uuid, mime.type
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536
nifi.provenance.repository.concurrent.merge.threads=2
# Volatile Provenance Respository Properties
nifi.provenance.repository.buffer.size=100000
# Component and Node Status History Repository
nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
# Volatile Status History Repository Properties
nifi.components.status.repository.buffer.size=1440
nifi.components.status.snapshot.frequency=1 min
# QuestDB Status History Repository Properties
nifi.status.repository.questdb.persist.node.days=14
nifi.status.repository.questdb.persist.component.days=3
nifi.status.repository.questdb.persist.location=./status_repository
# Site to Site properties
nifi.remote.input.host=node1
nifi.remote.input.secure=false
nifi.remote.input.socket.port=10001
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.remote.contents.cache.expiration=30 secs
# web properties #
#############################################
# For security, NiFi will present the UI on 127.0.0.1 and only be accessible through this loopback interface.
# Be aware that changing these properties may affect how your instance can be accessed without any restriction.
# We recommend configuring HTTPS instead. The administrators guide provides instructions on how to do this.
nifi.web.http.host=node1
nifi.web.http.port=8081
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=
nifi.web.https.port=
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
nifi.web.max.header.size=16 KB
nifi.web.proxy.context.path=
nifi.web.proxy.host=
nifi.web.max.content.size=
nifi.web.max.requests.per.second=30000
nifi.web.request.timeout=60 secs
nifi.web.request.ip.whitelist=
nifi.web.should.send.server.version=true
# Include or Exclude TLS Cipher Suites for HTTPS
nifi.web.https.ciphersuites.include=
nifi.web.https.ciphersuites.exclude=
# security properties #
nifi.sensitive.props.key=sf4eCVtTmnwRfMd5LarMMkKyTONuLvgE
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=NIFI_PBKDF2_AES_GCM_256
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=
nifi.security.autoreload.enabled=false
nifi.security.autoreload.interval=10 secs
nifi.security.keystore=./conf/keystore.p12
nifi.security.keystoreType=PKCS12
nifi.security.keystorePasswd=df9e762b67b2c74eb1ea147be8d7ecf0
nifi.security.keyPasswd=df9e762b67b2c74eb1ea147be8d7ecf0
nifi.security.truststore=./conf/truststore.p12
nifi.security.truststoreType=PKCS12
nifi.security.truststorePasswd=6d115b1494e9dd5112c2d2dc0608bc85
nifi.security.user.authorizer=single-user-authorizer
nifi.security.allow.anonymous.authentication=false
nifi.security.user.login.identity.provider=single-user-provider
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
# OpenId Connect SSO Properties #
nifi.security.user.oidc.discovery.url=
nifi.security.user.oidc.connect.timeout=5 secs
nifi.security.user.oidc.read.timeout=5 secs
nifi.security.user.oidc.client.id=
nifi.security.user.oidc.client.secret=
nifi.security.user.oidc.preferred.jwsalgorithm=
nifi.security.user.oidc.additional.scopes=
nifi.security.user.oidc.claim.identifying.user=
nifi.security.user.oidc.fallback.claims.identifying.user=
# Apache Knox SSO Properties #
nifi.security.user.knox.url=
nifi.security.user.knox.publicKey=
nifi.security.user.knox.cookieName=hadoop-jwt
nifi.security.user.knox.audiences=
# SAML Properties #
nifi.security.user.saml.idp.metadata.url=
nifi.security.user.saml.sp.entity.id=
nifi.security.user.saml.identity.attribute.name=
nifi.security.user.saml.group.attribute.name=
nifi.security.user.saml.metadata.signing.enabled=false
nifi.security.user.saml.request.signing.enabled=false
nifi.security.user.saml.want.assertions.signed=true
nifi.security.user.saml.signature.algorithm=http://www.w3.org/2001/04/xmldsig-more#rsa-sha256
nifi.security.user.saml.signature.digest.algorithm=http://www.w3.org/2001/04/xmlenc#sha256
nifi.security.user.saml.message.logging.enabled=false
nifi.security.user.saml.authentication.expiration=12 hours
nifi.security.user.saml.single.logout.enabled=false
nifi.security.user.saml.http.client.truststore.strategy=JDK
nifi.security.user.saml.http.client.connect.timeout=30 secs
nifi.security.user.saml.http.client.read.timeout=30 secs
# Identity Mapping Properties #
# These properties allow normalizing user identities such that identities coming from different identity providers
# (certificates, LDAP, Kerberos) can be treated the same internally in NiFi. The following example demonstrates normalizing
# DNs from certificates and principals from Kerberos into a common identity string:
#
# nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?), O=(.*?), L=(.*?), ST=(.*?), C=(.*?)$
# nifi.security.identity.mapping.value.dn=$1@$2
# nifi.security.identity.mapping.transform.dn=NONE
# nifi.security.identity.mapping.pattern.kerb=^(.*?)/instance@(.*?)$
# nifi.security.identity.mapping.value.kerb=$1@$2
# nifi.security.identity.mapping.transform.kerb=UPPER
# Group Mapping Properties #
# These properties allow normalizing group names coming from external sources like LDAP. The following example
# lowercases any group name.
#
# nifi.security.group.mapping.pattern.anygroup=^(.*)$
# nifi.security.group.mapping.value.anygroup=$1
# nifi.security.group.mapping.transform.anygroup=LOWER
# cluster common properties (all nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.heartbeat.missable.max=8
nifi.cluster.protocol.is.secure=false
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=node1
nifi.cluster.node.protocol.port=9991
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=60 sec
nifi.cluster.node.read.timeout=60 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=
# cluster load balancing properties #
nifi.cluster.load.balance.host=node1
nifi.cluster.load.balance.port=6342
nifi.cluster.load.balance.connections.per.node=2
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=node1:2181,node2:2182,node3:2183
nifi.zookeeper.connect.timeout=10 secs
nifi.zookeeper.session.timeout=10 secs
nifi.zookeeper.root.node=/nifi
nifi.zookeeper.client.secure=false
nifi.zookeeper.security.keystore=
nifi.zookeeper.security.keystoreType=
nifi.zookeeper.security.keystorePasswd=
nifi.zookeeper.security.truststore=
nifi.zookeeper.security.truststoreType=
nifi.zookeeper.security.truststorePasswd=
# Zookeeper properties for the authentication scheme used when creating acls on znodes used for cluster management
# Values supported for nifi.zookeeper.auth.type are "default", which will apply world/anyone rights on znodes
# and "sasl" which will give rights to the sasl/kerberos identity used to authenticate the nifi node
# The identity is determined using the value in nifi.kerberos.service.principal and the removeHostFromPrincipal
# and removeRealmFromPrincipal values (which should align with the kerberos.removeHostFromPrincipal and kerberos.removeRealmFromPrincipal
# values configured on the zookeeper server).
nifi.zookeeper.auth.type=
nifi.zookeeper.kerberos.removeHostFromPrincipal=
nifi.zookeeper.kerberos.removeRealmFromPrincipal=
# kerberos #
nifi.kerberos.krb5.file=
# kerberos service principal #
nifi.kerberos.service.principal=
nifi.kerberos.service.keytab.location=
# kerberos spnego principal #
nifi.kerberos.spnego.principal=
nifi.kerberos.spnego.keytab.location=
nifi.kerberos.spnego.authentication.expiration=12 hours
# external properties files for variable registry
# supports a comma delimited list of file locations
nifi.variable.registry.properties=
# analytics properties #
nifi.analytics.predict.enabled=false
nifi.analytics.predict.interval=3 mins
nifi.analytics.query.interval=5 mins
nifi.analytics.connection.model.implementation=org.apache.nifi.controller.status.analytics.models.OrdinaryLeastSquares
nifi.analytics.connection.model.score.name=rSquared
nifi.analytics.connection.model.score.threshold=.90
# runtime monitoring properties
nifi.monitor.long.running.task.schedule=
nifi.monitor.long.running.task.threshold=
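One sanity check worth doing before digging further is confirming that every node can reach the load-balance port configured above (6342) on the other nodes; this is only a sketch, assuming netcat is available and the remaining nodes are named node2 and node3:
nc -vz node2 6342
nc -vz node3 6342
If either check fails, a firewall or the nifi.cluster.load.balance.host setting on that node is the likely culprit.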
Hopefully someone can give some advice.
Thanks in advance.

Search Guard installation failed

I'm trying to set up search-guard-5-5.6.3- in ES 5.6.3.
While executing
./sgadmin_demo.sh
I get:
WARNING: JAVA_HOME not set, will use /usr/bin/java
Search Guard Admin v5
Will connect to localhost:9300 ... done
### LICENSE NOTICE Search Guard ###
If you use one or more of the following features in production
make sure you have a valid Search Guard license
(See https://floragunn.com/searchguard-validate-license)
* Kibana Multitenancy
* LDAP authentication/authorization
* Active Directory authentication/authorization
* REST Management API
* JSON Web Token (JWT) authentication/authorization
* Kerberos authentication/authorization
* Document- and Fieldlevel Security (DLS/FLS)
* Auditlogging
In case of any doubt mail to <sales@floragunn.com>
###################################
Contacting elasticsearch cluster 'searchguard_demo' and wait for YELLOW clusterstate ...
Clustername: searchguard_demo
Clusterstate: YELLOW
Number of nodes: 1
Number of data nodes: 1
searchguard index already exists, so we do not need to create one.
Populate config from /app/elasticsearch-5.6.3/plugins/search-guard-5/sgconfig/
Will update 'config' with /app/elasticsearch-5.6.3/plugins/search-guard-5/sgconfig/sg_config.yml
SUCC: Configuration for 'config' created or updated
Will update 'roles' with /app/elasticsearch-5.6.3/plugins/search-guard-5/sgconfig/sg_roles.yml
SUCC: Configuration for 'roles' created or updated
Will update 'rolesmapping' with /app/elasticsearch-5.6.3/plugins/search-guard-5/sgconfig/sg_roles_mapping.yml
SUCC: Configuration for 'rolesmapping' created or updated
Will update 'internalusers' with /app/elasticsearch-5.6.3/plugins/search-guard-5/sgconfig/sg_internal_users.yml
SUCC: Configuration for 'internalusers' created or updated
Will update 'actiongroups' with /app/elasticsearch-5.6.3/plugins/search-guard-5/sgconfig/sg_action_groups.yml
SUCC: Configuration for 'actiongroups' created or updated
Done with success
and while executing
$ curl --insecure -u admin:admin 'localhost:9200/_searchguard/authinfo?pretty'
I get the following error:
curl: (52) Empty reply from server
In the ES log it says:
[2017-11-07T18:27:06,684][WARN ][c.f.s.h.SearchGuardHttpServerTransport]
[aN2lbPk] Someone (/127.0.0.1:44850) speaks http plaintext instead of ssl,
will close the channel
Is there any configuration that I might need to look into?
You need to connect via HTTPS (not HTTP); that is why you see the "Someone (/127.0.0.1:44850) speaks http plaintext instead of ssl" log entry.
So try
$ curl --insecure -u admin:admin 'https://localhost:9200/_searchguard/authinfo?pretty'
The --insecure flag only makes sense together with HTTPS.
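If you want to confirm that the REST layer really is serving TLS on port 9200, a simple probe (a sketch, assuming openssl is installed) is:
openssl s_client -connect localhost:9200 </dev/null
Seeing a certificate chain printed confirms HTTPS is enabled, which is exactly why the plain-HTTP curl call gets an empty reply.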

Nexus 3.6 OSS Docker Hub Proxy - Can docker search but not docker pull

I've deployed Nexus OSS 3.6 and it's being served on http://server:8082/nexus
I have configured a docker-hub proxy using the instructions in http://www.sonatype.org/nexus/2017/02/16/using-nexus-3-as-your-repository-part-3-docker-images/ and have configured the docker-group to serve under port 18000
I can perform the following:
docker login server:18000
docker search server:18000/jenkins
but when I run:
docker pull server:18000/jenkins
I get the following error:
Error response from daemon: Get http://10.105.139.17:18000/v2/jenkins/manifests/latest:
error parsing HTTP 400 response body: invalid character '<'
looking for beginning of value:
"<html>\n<head>\n<meta http-equiv=\"Content-Type\"
content=\"text/html;charset=ISO-8859-1\"/>\n<title>
Error 400 </title>\n</head>\n<body>\n<h2>HTTP ERROR: 400</h2>\n
<p>Problem accessing /nexus/v2/token.
Reason:\n<pre> Not a Docker request</pre></p>\n<hr />
Powered by Jetty:// 9.3.20.v20170531<hr/>\n
</body>\n</html>\n"
My jetty nexus.properties config file is:
# Jetty section
application-port=8082
application-host=0.0.0.0
# nexus-args=${jetty.etc}/jetty.xml,${jetty.etc}/jetty-http.xml,${jetty.etc}/jetty-requestlog.xml
nexus-context-path=/nexus
# Nexus section
# nexus-edition=nexus-pro-edition
# nexus-features=\
# nexus-pro-feature
Could anyone offer any suggestions on how to fix this please?
I had the same problem when I enabled anonymous read on a Docker repository.
Repositories -> Docker hosted -> check the checkbox (Disable to allow anonymous pull) on the repository.
It seems you need to upgrade Nexus to 3.6.1, according to
https://issues.sonatype.org/browse/NEXUS-14488
in order to allow anonymous read again.
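As an extra sanity check (a sketch, assuming the docker-group connector is still listening on port 18000), you can query the Docker Registry v2 endpoint directly and confirm Nexus answers as a registry instead of with the Jetty HTML error page shown above:
curl -i -u youruser:yourpassword http://server:18000/v2/
A registry-aware response should include the header Docker-Distribution-Api-Version: registry/2.0.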

Error during SonarQube Scanner execution with SCM provider autodetection failed

I am using the Sonar Swift plugin and have set up Sonar on OS X. I am using Sonar 6.1, sonar-runner 2.8 and the backelite-sonar-swift-plugin-0.2.4.jar file. After running ./run-sonar-swift.sh it gives the following error. I am using a MySQL DB; Sonar connects to the DB and creates all the tables, but the run ends with an error.
Done linting! Found 34 violations, 2 serious in 4 files.
Running Lizard.....Running SonarQube using SonarQube Runner..WARN: Property 'sonar.jdbc.url' is not supported any more. It will be ignored. There is no longer any DB connection to the SQ database.
WARN: Property 'sonar.jdbc.username' is not supported any more. It will be ignored. There is no longer any DB connection to the SQ database.
WARN: Property 'sonar.jdbc.password' is not supported any more. It will be ignored. There is no longer any DB connection to the SQ database.
....WARN: SCM provider autodetection failed. No SCM provider claims to support this project. Please use sonar.scm.provider to define SCM of your project.
.ERROR: Error during SonarQube Scanner execution
ERROR: File [moduleKey=abc.Krish, relative=KrishTests/KrishTests.swift, basedir=/Users/vjayah1/Desktop/Krish] can't be indexed twice. Please check that inclusion/exclusion patterns produce disjoint sets for main and test files
ERROR:
ERROR: Re-run SonarQube Scanner using the -X switch to enable full debug logging.
ERROR - Command 'sonar-runner ' failed with error code: 1
sonar-runner.properties (/usr/local/Cellar/sonar-runner/2.8/libexec/conf)
#Configure here general information about the environment, such as SonarQube DB details for example
#No information about specific project should appear here
#----- Default SonarQube server
sonar.host.url=http://localhost:9000
#----- Default source code encoding
#sonar.sourceEncoding=UTF-8
#----- Global database settings (not used for SonarQube 5.2+)
sonar.jdbc.username=root
sonar.jdbc.password=1234
#----- PostgreSQL
#sonar.jdbc.url=jdbc:postgresql://localhost/sonar
#----- MySQL
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar_firstdb?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
#----- Oracle
#sonar.jdbc.url=jdbc:oracle:thin:@localhost/XE
#----- Microsoft SQLServer
#sonar.jdbc.url=jdbc:jtds:sqlserver://localhost/sonar;SelectMethod=Cursor
sonar.properties file (/usr/local/Cellar/sonarqube/6.1/libexec/conf)
# Property values can:
# - reference an environment variable, for example sonar.jdbc.url= ${env:SONAR_JDBC_URL}
# - be encrypted. See http://redirect.sonarsource.com/doc/settings-encryption.html
#--------------------------------------------------------------------------------------------------
# DATABASE
#
# IMPORTANT: the embedded H2 database is used by default. It is recommended for tests but not for
# production use. Supported databases are MySQL, Oracle, PostgreSQL and Microsoft SQLServer.
# User credentials.
# Permissions to create tables, indices and triggers must be granted to JDBC user.
# The schema must be created first.
sonar.jdbc.username=root
sonar.jdbc.password=1234
#----- Embedded Database (default)
# H2 embedded database server listening port, defaults to 9092
#sonar.embeddedDatabase.port=9092
#----- MySQL 5.6 or greater
# Only InnoDB storage engine is supported (not myISAM).
# Only the bundled driver is supported. It can not be changed.
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar_firstdb?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
#----- Oracle 11g/12c
# - Only thin client is supported
# - Only versions 11.2.x and 12.x of Oracle JDBC driver are supported
# - The JDBC driver must be copied into the directory extensions/jdbc-driver/oracle/
# - If you need to set the schema, please refer to http://jira.sonarsource.com/browse/SONAR-5000
#sonar.jdbc.url=jdbc:oracle:thin:@localhost:1521/XE
#----- PostgreSQL 8.x/9.x
# If you don't use the schema named "public", please refer to http://jira.sonarsource.com/browse/SONAR-5000
#sonar.jdbc.url=jdbc:postgresql://localhost/sonar
#----- Microsoft SQLServer 2012/2014 and SQL Azure
# A database named sonar must exist and its collation must be case-sensitive (CS) and accent-sensitive (AS)
# Use the following connection string if you want to use integrated security with Microsoft Sql Server
# Do not set sonar.jdbc.username or sonar.jdbc.password property if you are using Integrated Security
# For Integrated Security to work, you have to download the Microsoft SQL JDBC driver package from
# http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=11774
# and copy sqljdbc_auth.dll to your path. You have to copy the 32 bit or 64 bit version of the dll
# depending upon the architecture of your server machine.
# This version of SonarQube has been tested with Microsoft SQL JDBC version 4.1
#sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar;integratedSecurity=true
# Use the following connection string if you want to use SQL Auth while connecting to MS Sql Server.
# Set the sonar.jdbc.username and sonar.jdbc.password appropriately.
#sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar
#----- Connection pool settings
# The maximum number of active connections that can be allocated
# at the same time, or negative for no limit.
# The recommended value is 1.2 * max sizes of HTTP pools. For example if HTTP ports are
# enabled with default sizes (50, see property sonar.web.http.maxThreads)
# then sonar.jdbc.maxActive should be 1.2 * (50) = 120.
#sonar.jdbc.maxActive=60
# The maximum number of connections that can remain idle in the
# pool, without extra ones being released, or negative for no limit.
#sonar.jdbc.maxIdle=5
# The minimum number of connections that can remain idle in the pool,
# without extra ones being created, or zero to create none.
#sonar.jdbc.minIdle=2
# The maximum number of milliseconds that the pool will wait (when there
# are no available connections) for a connection to be returned before
# throwing an exception, or <= 0 to wait indefinitely.
#sonar.jdbc.maxWait=5000
#sonar.jdbc.minEvictableIdleTimeMillis=600000
#sonar.jdbc.timeBetweenEvictionRunsMillis=30000
#--------------------------------------------------------------------------------------------------
# WEB SERVER
# Web server is executed in a dedicated Java process. By default heap size is 512Mb.
# Use the following property to customize JVM options.
# Recommendations:
#
# The HotSpot Server VM is recommended. The property -server should be added if server mode
# is not enabled by default on your environment:
# http://docs.oracle.com/javase/8/docs/technotes/guides/vm/server-class.html
#
# Startup can be long if entropy source is short of entropy. Adding
# -Djava.security.egd=file:/dev/./urandom is an option to resolve the problem.
# See https://wiki.apache.org/tomcat/HowTo/FasterStartUp#Entropy_Source
#
#sonar.web.javaOpts=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.web.javaAdditionalOpts=
# Binding IP address. For servers with more than one IP address, this property specifies which
# address will be used for listening on the specified ports.
# By default, ports will be used on all IP addresses associated with the server.
#sonar.web.host=0.0.0.0
# Web context. When set, it must start with forward slash (for example /sonarqube).
# The default value is root context (empty value).
#sonar.web.context=
# TCP port for incoming HTTP connections. Default value is 9000.
sonar.web.port=9000
# The maximum number of connections that the server will accept and process at any given time.
# When this number has been reached, the server will not accept any more connections until
# the number of connections falls below this value. The operating system may still accept connections
# based on the sonar.web.connections.acceptCount property. The default value is 50.
#sonar.web.http.maxThreads=50
# The minimum number of threads always kept running. The default value is 5.
#sonar.web.http.minThreads=5
# The maximum queue length for incoming connection requests when all possible request processing
# threads are in use. Any requests received when the queue is full will be refused.
# The default value is 25.
#sonar.web.http.acceptCount=25
# By default users are logged out and sessions closed when server is restarted.
# If you prefer keeping user sessions open, a secret should be defined. Value is
# HS256 key encoded with base64. It must be unique for each installation of SonarQube.
# Example of command-line:
# echo -n "type_what_you_want" | openssl dgst -sha256 -hmac "key" -binary | base64
#sonar.auth.jwtBase64Hs256Secret=
#--------------------------------------------------------------------------------------------------
# COMPUTE ENGINE
# The Compute Engine is responsible for processing background tasks.
# Compute Engine is executed in a dedicated Java process. Default heap size is 512Mb.
# Use the following property to customize JVM options.
# Recommendations:
#
# The HotSpot Server VM is recommended. The property -server should be added if server mode
# is not enabled by default on your environment:
# http://docs.oracle.com/javase/8/docs/technotes/guides/vm/server-class.html
#
#sonar.ce.javaOpts=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.ce.javaAdditionalOpts=
# The number of workers in the Compute Engine. Value must be greater than zero.
# By default the Compute Engine uses a single worker and therefore processes tasks one at a time.
# Recommendations:
#
# Using N workers will require N times as much Heap memory (see property
# sonar.ce.javaOpts to tune heap) and produce N times as much IOs on disk, database and
# Elasticsearch. The number of workers must suit your environment.
#sonar.ce.workerCount=1
#--------------------------------------------------------------------------------------------------
# ELASTICSEARCH
# Elasticsearch is used to facilitate fast and accurate information retrieval.
# It is executed in a dedicated Java process. Default heap size is 1Gb.
# JVM options of Elasticsearch process
# Recommendations:
#
# Use HotSpot Server VM. The property -server should be added if server mode
# is not enabled by default on your environment:
# http://docs.oracle.com/javase/8/docs/technotes/guides/vm/server-class.html
#
#sonar.search.javaOpts=-Xmx1G -Xms256m -Xss256k -Djna.nosys=true \
# -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 \
# -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.search.javaAdditionalOpts=
# Elasticsearch port. Default is 9001. Use 0 to get a free port.
# As a security precaution, should be blocked by a firewall and not exposed to the Internet.
#sonar.search.port=9001
# Elasticsearch host. The search server will bind this address and the search client will connect to it.
# Default is 127.0.0.1.
# As a security precaution, should NOT be set to a publicly available address.
#sonar.search.host=127.0.0.1
#--------------------------------------------------------------------------------------------------
# UPDATE CENTER
# Update Center requires an internet connection to request https://update.sonarsource.org
# It is enabled by default.
#sonar.updatecenter.activate=true
# HTTP proxy (default none)
#http.proxyHost=
#http.proxyPort=
# HTTPS proxy (defaults are values of http.proxyHost and http.proxyPort)
#https.proxyHost=
#https.proxyPort=
# NT domain name if NTLM proxy is used
#http.auth.ntlm.domain=
# SOCKS proxy (default none)
#socksProxyHost=
#socksProxyPort=
# Proxy authentication (used for HTTP, HTTPS and SOCKS proxies)
#http.proxyUser=
#http.proxyPassword=
#--------------------------------------------------------------------------------------------------
# LOGGING
# Level of logs. Supported values are INFO(default), DEBUG and TRACE (DEBUG + SQL + ES requests)
#sonar.log.level=INFO
# Path to log files. Can be absolute or relative to installation directory.
# Default is <installation home>/logs
#sonar.path.logs=logs
# Rolling policy of log files
# - based on time if value starts with "time:", for example by day ("time:yyyy-MM-dd")
# or by month ("time:yyyy-MM")
# - based on size if value starts with "size:", for example "size:10MB"
# - disabled if value is "none". That needs logs to be managed by an external system like logrotate.
#sonar.log.rollingPolicy=time:yyyy-MM-dd
# Maximum number of files to keep if a rolling policy is enabled.
# - maximum value is 20 on size rolling policy
# - unlimited on time rolling policy. Set to zero to disable old file purging.
#sonar.log.maxFiles=7
# Access log is the list of all the HTTP requests received by server. If enabled, it is stored
# in the file {sonar.path.logs}/access.log. This file follows the same rolling policy as for
# sonar.log (see sonar.log.rollingPolicy and sonar.log.maxFiles).
#sonar.web.accessLogs.enable=true
# Format of access log. It is ignored if sonar.web.accessLogs.enable=false. Possible values are:
# - "common" is the Common Log Format, shortcut to: %h %l %u %user %date "%r" %s %b
# - "combined" is another format widely recognized, shortcut to: %h %l %u [%t] "%r" %s %b "%i{Referer}" "%i{User-Agent}"
# - else a custom pattern. See http://logback.qos.ch/manual/layouts.html#AccessPatternLayout.
# The login of authenticated user is not implemented with "%u" but with "%reqAttribute{LOGIN}" (since version 6.1).
# The value displayed for anonymous users is "-".
# If SonarQube is behind a reverse proxy, then the following value allows to display the correct remote IP address:
#sonar.web.accessLogs.pattern=%i{X-Forwarded-For} %l %u [%t] "%r" %s %b "%i{Referer}" "%i{User-Agent}"
# Default value is:
#sonar.web.accessLogs.pattern=combined
#--------------------------------------------------------------------------------------------------
# OTHERS
# Delay in seconds between processing of notification queue. Default is 60 seconds.
#sonar.notifications.delay=60
# Paths to persistent data files (embedded database and search index) and temporary files.
# Can be absolute or relative to installation directory.
# Defaults are respectively <installation home>/data and <installation home>/temp
#sonar.path.data=data
#sonar.path.temp=temp
#--------------------------------------------------------------------------------------------------
# DEVELOPMENT - only for developers
# The following properties MUST NOT be used in production environments.
# Dev mode allows to reload web sources on changes and to restart server when new versions
# of plugins are deployed.
#sonar.web.dev=false
# Path to webapp sources for hot-reloading of Ruby on Rails, JS and CSS (only core,
# plugins not supported).
#sonar.web.dev.sources=/path/to/server/sonar-web/src/main/webapp
# Elasticsearch HTTP connector, for example for KOPF:
# http://lmenezes.com/elasticsearch-kopf/?location=http://localhost:9010
#sonar.search.httpPort=-1
sonar-project.properties (project root folder)
##########################
# Required configuration #
##########################
sonar.projectKey=abc.Krish
sonar.projectName=Krish
sonar.projectVersion=1.0
# Comment if you have a project with mixed ObjC / Swift
sonar.language=swift
# Project description
sonar.projectDescription=prjDescription
# Path to source directories
sonar.sources=/Users/vjayah1/Desktop/Krish
# Path to test directories (comment if no test)
sonar.tests=/Users/vjayah1/Desktop/Krish/KrishTests
# Destination Simulator to run tests
# As string expected in destination argument of xcodebuild command
# Example = sonar.swift.simulator=platform=iOS Simulator,name=iPhone 6,OS=9.2
sonar.swift.simulator=platform=iOS Simulator,name=iPhone 7,OS=10.1
# Xcode project configuration (.xcodeproj)
# and use the later to specify which project(s) to include in the analysis (comma separated list)
# Specify either xcodeproj or xcodeproj + xcworkspace
#sonar.swift.project=MyPrj.xcodeproj
#sonar.swift.workspace=MyWrkSpc.xcworkspace
sonar.swift.project=Krish.xcodeproj
# Scheme to build your application
sonar.swift.appScheme=Krish
# Configuration to use for your scheme. if you do not specify that the default will be Debug
sonar.swift.appConfiguration=Debug
##########################
# Optional configuration #
##########################
# Encoding of the source code
sonar.sourceEncoding=UTF-8
# SCM
# sonar.scm.enabled=true
# sonar.scm.url=scm:git:http://xxx
# JUnit report generated by run-sonar.sh is stored in sonar-reports/TEST-report.xml
# Change it only if you generate the file on your own
# The XML files have to be prefixed by TEST- otherwise they are not processed
# sonar.junit.reportsPath=sonar-reports/
# Lizard report generated by run-sonar.sh is stored in sonar-reports/lizard-report.xml
# Change it only if you generate the file on your own
# sonar.swift.lizard.report=sonar-reports/lizard-report.xml
# Cobertura report generated by run-sonar.sh is stored in sonar-reports/coverage.xml
# Change it only if you generate the file on your own
# sonar.swift.coverage.reportPattern=sonar-reports/coverage*.xml
# OCLint report generated by run-sonar.sh is stored in sonar-reports/oclint.xml
# Change it only if you generate the file on your own
# sonar.swift.swiftlint.report=sonar-reports/*swiftlint.txt
# Paths to exclude from coverage report (tests, 3rd party libraries etc.)
# sonar.swift.excludedPathsFromCoverage=pattern1,pattern2
sonar.swift.excludedPathsFromCoverage=.*Tests.*
This is a configuration issue:
# Path to source directories
sonar.sources=/Users/vjayah1/Desktop/Krish
# Path to test directories (comment if no test)
sonar.tests=/Users/vjayah1/Desktop/Krish/KrishTests
Source folders are indexed recursively, so /Users/vjayah1/Desktop/Krish will also contain /Users/vjayah1/Desktop/Krish/KrishTests.
You should exclude the test folder from the main sources:
sonar.exclusions=/Users/vjayah1/Desktop/Krish/KrishTests/**/*
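Alternatively, instead of excluding the test folder you can point sonar.sources at a narrower directory that does not contain the tests; a sketch, assuming the app sources live in a Krish subfolder of the project root:
sonar.sources=/Users/vjayah1/Desktop/Krish/Krish
sonar.tests=/Users/vjayah1/Desktop/Krish/KrishTests
Either way the goal is that the main and test sets are disjoint, which is what the "can't be indexed twice" error is complaining about.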

"mean init" fails during admin user creation

I followed the basic installation tutorial for the MEAN stack on OS X.
MongoDB is running and all dependencies are believed to be installed.
During init of my first app, the script fails when asking for the admin user password (it fails before I am allowed to input anything as my admin password):
$ mean init myApp
? What would you name your mean app? myApp
? The Mean project is currently in developer preview. To help improve the -
quality of this product, we collect anonymized data on how the mean-cli is used -
You may choose to opt out of this collection now (by choosing 'N' at the below prompt)
or at any time in the future by running the following command:
mean disable user-reporting
Do you want to help us improve the mean network (Y/n)? Y
? Please provide your email so we can create your first admin user: joe@doe.com
? Please provide your username so we can create your first admin user: admin
Cloning branch: master into destination folder: myApp
git clone --depth 1 -bmaster https://github.com/linnovate/mean.git "myApp"
Cloning into 'myApp'...
##############################################################
#
# !!! MONGOOSE WARNING !!!
#
# This is an UNSTABLE release of Mongoose.
# Unstable releases are available for preview/testing only.
# DO NOT run this in production.
#
##############################################################
DB connection successful!
Please provide password so we can create your first admin user
prompt: password: /usr/local/lib/node_modules/mean-cli/node_modules/mongoose/node_modules/mongodb/lib/mongodb/connection/base.js:246
throw message;
^
TypeError: Cannot read property 'length' of undefined
at processResults (/usr/local/lib/node_modules/mean-cli/node_modules/mongoose/node_modules/mongodb/lib/mongodb/db.js:1581:31)
at /usr/local/lib/node_modules/mean-cli/node_modules/mongoose/node_modules/mongodb/lib/mongodb/db.js:1619:20
at /usr/local/lib/node_modules/mean-cli/node_modules/mongoose/node_modules/mongodb/lib/mongodb/db.js:1157:7
at /usr/local/lib/node_modules/mean-cli/node_modules/mongoose/node_modules/mongodb/lib/mongodb/db.js:1890:9
at Server.Base._callHandler (/usr/local/lib/node_modules/mean-cli/node_modules/mongoose/node_modules/mongodb/lib/mongodb/connection/base.js:448:41)
at /usr/local/lib/node_modules/mean-cli/node_modules/mongoose/node_modules/mongodb/lib/mongodb/connection/server.js:481:18
at MongoReply.parseBody (/usr/local/lib/node_modules/mean-cli/node_modules/mongoose/node_modules/mongodb/lib/mongodb/responses/mongo_reply.js:68:5)
at null.<anonymous> (/usr/local/lib/node_modules/mean-cli/node_modules/mongoose/node_modules/mongodb/lib/mongodb/connection/server.js:439:20)
at emit (events.js:107:17)
at null.<anonymous> (/usr/local/lib/node_modules/mean-cli/node_modules/mongoose/node_modules/mongodb/lib/mongodb/connection/connection_pool.js:201:13)
I am able to create users manually in the resulting app folder.
But the resulting application seems to be incomplete and I cannot start it.
$ mean user joe@doe.com --addRole admin;
Adding role `admin` to user `joe@doe.com`
##############################################################
#
# !!! MONGOOSE WARNING !!!
#
# This is an UNSTABLE release of Mongoose.
# Unstable releases are available for preview/testing only.
# DO NOT run this in production.
#
##############################################################
DB connection successful!
successfully updated
You need to install the MongoDB database.
http://www.liquidweb.com/kb/how-to-install-mongodb-on-ubuntu-14-04/
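If MongoDB is in fact installed and running, as the question states, a quick way to confirm mean-cli can actually reach it on the default port is to ping it from the mongo shell; this is only a sketch, assuming mongod is listening on localhost:27017:
mongo --eval 'db.runCommand({ ping: 1 })'
If the ping succeeds, the failure is more likely an incompatibility between the old bundled Mongoose/MongoDB driver and the installed MongoDB version than a missing database.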
