I would like to disable New Relic logs in my Ruby application, but only in the test environment.
Since the documentation only covers setting log levels, and I want to avoid any logging at all, is there an option I can use to disable it?
Here is my newrelic.yml:
#
# This file configures the New Relic Agent. New Relic monitors Ruby, Java,
# .NET, PHP, Python and Node applications with deep visibility and low
# overhead. For more information, visit www.newrelic.com.
#
# Generated April 27, 2016, for version 3.15.2.317
#
# For full documentation of agent configuration options, please refer to
# https://docs.newrelic.com/docs/agents/ruby-agent/installation-configuration/ruby-agent-configuration
common: &default_settings
  # Required license key associated with your New Relic account.
  license_key: <%= ENV["NEW_RELIC_LICENSE_KEY"] %>
  # Your application name. Renaming here affects where data displays in New
  # Relic. For more details, see https://docs.newrelic.com/docs/apm/new-relic-apm/maintenance/renaming-applications
  app_name: <%= ENV["NEW_RELIC_APP_NAME"] || 'Components' %>
  # To disable the agent regardless of other settings, uncomment the following:
  # agent_enabled: false
  #
  # Enable or disable the transmission of data to the New Relic collector.
  monitor_mode: <%= ENV["NO_INSTRUMENTATION"] == '1' ? false : true %>
  # Log level for agent logging: error, warn, info or debug.
  log_level: <%= ENV["NEW_RELIC_DEBUG_LEVEL"] || 'info' %>
  # Enable or disable SSL for transmissions to the New Relic collector.
  # Defaults to true in versions 3.5.6 and higher.
  ssl: true
  # Enable or disable high security mode, a suite of security features
  high_security: false
  # Enable or disable synchronous connection to the New Relic data collection
  # service during application startup.
  sync_startup: false
  # Enable or disable the exit handler that sends data to the New Relic
  # collector before shutting down.
  send_data_on_exit: true
  # Maximum number of seconds to attempt to contact the New Relic collector.
  timeout: 120

  # =============================== Transaction Tracer ================================
  #
  # The transaction traces feature collects detailed information from a
  # selection of transactions, including a summary of the calling sequence, a
  # breakdown of time spent, and a list of SQL queries and their query plans
  # (on mysql and postgresql). Available features depend on your New Relic
  # subscription level.
  transaction_tracer:
    # Enable or disable transaction traces.
    enabled: true
    # The agent will collect traces for transactions that exceed this time
    # threshold (in seconds). Specify a float value or apdex_f.
    #
    # apdex_f is the response time above which a transaction is considered
    # "frustrating." Defaults to four times apdex_t. Requests which complete
    # in less than apdex_t are rated satisfied. Requests which take more than
    # apdex_t, but less than four times apdex_t (apdex_f), are tolerated. Any
    # requests which take longer than apdex_f are rated frustrated.
    transaction_threshold: <%= ENV['NEW_RELIC_TRANSACTION_THRESHOLD'] || 'apdex_f' %>
    # Determines whether Redis command arguments should be recorded within
    # Transaction Traces.
    record_redis_arguments: false
    # Threshold (in seconds) above which the agent will collect explain plans.
    # Relevant only when explain_enabled is true.
    explain_threshold: 0.2
    # Enable or disable the collection of explain plans in transaction traces.
    # This setting will also apply to explain plans in Slow SQL traces if
    # slow_sql.explain_enabled is not set separately.
    explain_enabled: true
    # Stack traces will be included in transaction trace nodes when their duration
    # exceeds this threshold.
    stack_trace_threshold: 0.5
    # Maximum number of transaction trace nodes to record in a single
    # transaction trace.
    limit_segments: 4000

  # =============================== Error Collector ================================
  #
  # The agent collects and reports all uncaught exceptions by default. These
  # configuration options allow you to customize the error collection.
  error_collector:
    # Enable or disable recording of traced errors and error count metrics.
    enabled: true

  # ============================== Heroku ===============================
  heroku:
    use_dyno_names: true

  # ============================== Thread Profiler ===============================
  thread_profiler:
    enabled: true

# Environment-specific settings are in this section.
# RAILS_ENV or RACK_ENV (as appropriate) is used to determine the environment.
# If your application has other named environments, configure them here.
development:
  <<: *default_settings
  app_name: Orchestrator (Development)
  # NOTE: There is substantial overhead when running in developer mode.
  # Do not use for production or load testing.
  developer_mode: true

test:
  <<: *default_settings
  # It doesn't make sense to report to New Relic from automated test runs.
  monitor_mode: false

staging:
  <<: *default_settings
  app_name: Components (Staging)

production:
  <<: *default_settings
After some reading, I found this post: https://discuss.newrelic.com/t/stop-logging-in-newrelic-agent-log-file/39876/3
I adapted the code to YAML and ended up with:
test:
  <<: *default_settings
  # It doesn't make sense to report to New Relic from automated test runs.
  monitor_mode: false
  logging:
    enabled: false
and the problem is now solved!
You have log_level: off
Also a "hacky way", if you don't have log folder it won't write log...
the current working directory should contain a log directory
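If the goal is to silence the agent completely in test, another option hinted at in the generated file itself is to disable the agent altogether. A minimal sketch, assuming the same defaults anchor as above:

test:
  <<: *default_settings
  monitor_mode: false
  # Disabling the agent entirely should also stop it from writing its log file.
  agent_enabled: false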
I have created a Hygieia dashboard and am trying to configure a Sonar project. I followed the hygieia-codequality-sonar-collector document below and am running the Sonar collector.
https://github.com/Hygieia/Hygieia/blob/gh-pages/pages/hygieia/collectors/build/sonar.md
Instead of sonar.usernames and sonar.passwords, I have provided sonar.tokens[0]= in application.properties, but I am getting the login error below.
ERROR c.c.d.collector.DefaultSonar6Client - org.springframework.web.client.HttpClientErrorException: 401
2022-08-16 04:19:00,304 [taskScheduler-1] ERROR c.c.d.collector.SonarCollectorTask - org.springframework.web.client.HttpClientErrorException: 401
2022-08-16 04:19:00,304 [taskScheduler-1] INFO c.c.d.collect
# Database Name
dbname=dashboarddb
# Database HostName - default is localhost
dbhost=localhost
# Database Port - default is 27017
dbport=27017
# MongoDB replicaset
dbreplicaset=[false if you are not using MongoDB replicaset]
dbhostport=[host1:port1,host2:port2,host3:port3]
# Database Username - default is blank
dbusername=dashboarduser
# Database Password - default is blank
dbpassword=dbpassword
# Collector schedule (required)
sonar.cron=0 0/5 * * * *
# Sonar server(s) (required) - Can provide multiple
sonar.servers[0]=https://abc.company.com
# Sonar version, match array index to the server. If not set, will default to version prior to 6.3.
sonar.versions[0]=8.6
# Sonar Metrics - Required.
#Sonar versions lesser than 6.3
# Sonar tokens to connect to authenticated url
sonar.tokens[0]=xxxxxxxxx
#sonar.metrics[0]=ncloc,line_coverage,violations,critical_violations,major_violations,blocker_violations,violations_density,sqale_index,test_success_density,test_failures,test_errors,tests
# For Sonar version 6.3 and above
sonar.metrics[0]=ncloc,violations,new_vulnerabilities,critical_violations,major_violations,blocker_violations,tests,test_success_density,test_errors,test_failures,coverage,line_coverage,sqale_index,alert_status,quality_gate_details
# Sonar login credentials
# Format: username1,username2,username3,etc.
#sonar.usernames=
# Format: password1,password2,password3,etc.
#sonar.passwords=
Please help me with this.
I fixed the token-based Sonar login issue after updating the sonar.servers[0] URL.
Correct application.properties file:
# Database Name
dbname=dashboarddb
# Database HostName - default is localhost
dbhost=localhost
# Database Port - default is 27017
dbport=27017
# MongoDB replicaset
dbreplicaset=[false if you are not using MongoDB replicaset]
dbhostport=[host1:port1,host2:port2,host3:port3]
# Database Username - default is blank
dbusername=dashboarduser
# Database Password - default is blank
dbpassword=dbpassword
# Collector schedule (required)
sonar.cron=0 0/5 * * * *
# Sonar server(s) (required) - Can provide multiple
sonar.servers[0]=https://api-token@abc.company.com
# Sonar version, match array index to the server. If not set, will default to version prior to 6.3.
sonar.versions[0]=8.6
# Sonar Metrics - Required.
#Sonar versions lesser than 6.3
# Sonar tokens to connect to authenticated url
sonar.tokens[0]=xxxxxxxxx
#sonar.metrics[0]=ncloc,line_coverage,violations,critical_violations,major_violations,blocker_violations,violations_density,sqale_index,test_success_density,test_failures,test_errors,tests
# For Sonar version 6.3 and above
sonar.metrics[0]=ncloc,violations,new_vulnerabilities,critical_violations,major_violations,blocker_violations,tests,test_success_density,test_errors,test_failures,coverage,line_coverage,sqale_index,alert_status,quality_gate_details
# Sonar login credentials
# Format: username1,username2,username3,etc.
#sonar.usernames=
# Format: password1,password2,password3,etc.
#sonar.passwords=
Thank you.
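As a quick sanity check, the token itself can be verified against the SonarQube Web API by passing it as the username with an empty password (the host and token below are the placeholders from the configuration above):

curl -u xxxxxxxxx: https://abc.company.com/api/authentication/validate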
I am facing a "failed to communicate with peer" issue when loading data through a Round Robin load balancer on a queue in an Apache NiFi cluster.
I set up 3 nodes in the cluster, and below is the nifi.properties from one of the nodes.
From the screenshot attached below, I have a GetFile processor which reads text files containing multiple lines in CSV format. When there are several files, the processor puts them on the queue once they have been read. The queue uses a Round Robin load balancer, and the error occurs as soon as data starts being loaded into the queue.
####################
# State Management #
####################
nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=true
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# H2 Settings
nifi.database.directory=./database_repository
nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.wal.implementation=org.apache.nifi.wali.SequentialAccessWriteAheadLog
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.checkpoint.interval=20 secs
nifi.flowfile.repository.always.sync=false
nifi.flowfile.repository.encryption.key.provider.implementation=
nifi.flowfile.repository.encryption.key.provider.location=
nifi.flowfile.repository.encryption.key.provider.password=
nifi.flowfile.repository.encryption.key.id=
nifi.flowfile.repository.encryption.key=
nifi.flowfile.repository.retain.orphaned.flowfiles=true
nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=20000
# Content Repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=1 MB
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=7 days
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=../nifi-content-viewer/
nifi.content.repository.encryption.key.provider.implementation=
nifi.content.repository.encryption.key.provider.location=
nifi.content.repository.encryption.key.provider.password=
nifi.content.repository.encryption.key.id=
nifi.content.repository.encryption.key=
# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
nifi.provenance.repository.encryption.key.provider.implementation=
nifi.provenance.repository.encryption.key.provider.location=
nifi.provenance.repository.encryption.key.provider.password=
nifi.provenance.repository.encryption.key.id=
nifi.provenance.repository.encryption.key=
# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=30 days
nifi.provenance.repository.max.storage.size=10 GB
nifi.provenance.repository.rollover.time=10 mins
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable. Some examples to consider are filename, uuid, mime.type
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536
nifi.provenance.repository.concurrent.merge.threads=2
# Volatile Provenance Respository Properties
nifi.provenance.repository.buffer.size=100000
# Component and Node Status History Repository
nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
# Volatile Status History Repository Properties
nifi.components.status.repository.buffer.size=1440
nifi.components.status.snapshot.frequency=1 min
# QuestDB Status History Repository Properties
nifi.status.repository.questdb.persist.node.days=14
nifi.status.repository.questdb.persist.component.days=3
nifi.status.repository.questdb.persist.location=./status_repository
# Site to Site properties
nifi.remote.input.host=node1
nifi.remote.input.secure=false
nifi.remote.input.socket.port=10001
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.remote.contents.cache.expiration=30 secs
# web properties #
#############################################
# For security, NiFi will present the UI on 127.0.0.1 and only be accessible through this loopback interface.
# Be aware that changing these properties may affect how your instance can be accessed without any restriction.
# We recommend configuring HTTPS instead. The administrators guide provides instructions on how to do this.
nifi.web.http.host=node1
nifi.web.http.port=8081
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=
nifi.web.https.port=
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
nifi.web.max.header.size=16 KB
nifi.web.proxy.context.path=
nifi.web.proxy.host=
nifi.web.max.content.size=
nifi.web.max.requests.per.second=30000
nifi.web.request.timeout=60 secs
nifi.web.request.ip.whitelist=
nifi.web.should.send.server.version=true
# Include or Exclude TLS Cipher Suites for HTTPS
nifi.web.https.ciphersuites.include=
nifi.web.https.ciphersuites.exclude=
# security properties #
nifi.sensitive.props.key=sf4eCVtTmnwRfMd5LarMMkKyTONuLvgE
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=NIFI_PBKDF2_AES_GCM_256
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=
nifi.security.autoreload.enabled=false
nifi.security.autoreload.interval=10 secs
nifi.security.keystore=./conf/keystore.p12
nifi.security.keystoreType=PKCS12
nifi.security.keystorePasswd=df9e762b67b2c74eb1ea147be8d7ecf0
nifi.security.keyPasswd=df9e762b67b2c74eb1ea147be8d7ecf0
nifi.security.truststore=./conf/truststore.p12
nifi.security.truststoreType=PKCS12
nifi.security.truststorePasswd=6d115b1494e9dd5112c2d2dc0608bc85
nifi.security.user.authorizer=single-user-authorizer
nifi.security.allow.anonymous.authentication=false
nifi.security.user.login.identity.provider=single-user-provider
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
# OpenId Connect SSO Properties #
nifi.security.user.oidc.discovery.url=
nifi.security.user.oidc.connect.timeout=5 secs
nifi.security.user.oidc.read.timeout=5 secs
nifi.security.user.oidc.client.id=
nifi.security.user.oidc.client.secret=
nifi.security.user.oidc.preferred.jwsalgorithm=
nifi.security.user.oidc.additional.scopes=
nifi.security.user.oidc.claim.identifying.user=
nifi.security.user.oidc.fallback.claims.identifying.user=
# Apache Knox SSO Properties #
nifi.security.user.knox.url=
nifi.security.user.knox.publicKey=
nifi.security.user.knox.cookieName=hadoop-jwt
nifi.security.user.knox.audiences=
# SAML Properties #
nifi.security.user.saml.idp.metadata.url=
nifi.security.user.saml.sp.entity.id=
nifi.security.user.saml.identity.attribute.name=
nifi.security.user.saml.group.attribute.name=
nifi.security.user.saml.metadata.signing.enabled=false
nifi.security.user.saml.request.signing.enabled=false
nifi.security.user.saml.want.assertions.signed=true
nifi.security.user.saml.signature.algorithm=http://www.w3.org/2001/04/xmldsig-more#rsa-sha256
nifi.security.user.saml.signature.digest.algorithm=http://www.w3.org/2001/04/xmlenc#sha256
nifi.security.user.saml.message.logging.enabled=false
nifi.security.user.saml.authentication.expiration=12 hours
nifi.security.user.saml.single.logout.enabled=false
nifi.security.user.saml.http.client.truststore.strategy=JDK
nifi.security.user.saml.http.client.connect.timeout=30 secs
nifi.security.user.saml.http.client.read.timeout=30 secs
# Identity Mapping Properties #
# These properties allow normalizing user identities such that identities coming from different identity providers
# (certificates, LDAP, Kerberos) can be treated the same internally in NiFi. The following example demonstrates normalizing
# DNs from certificates and principals from Kerberos into a common identity string:
#
# nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?), O=(.*?), L=(.*?), ST=(.*?), C=(.*?)$
# nifi.security.identity.mapping.value.dn=$1@$2
# nifi.security.identity.mapping.transform.dn=NONE
# nifi.security.identity.mapping.pattern.kerb=^(.*?)/instance@(.*?)$
# nifi.security.identity.mapping.value.kerb=$1@$2
# nifi.security.identity.mapping.transform.kerb=UPPER
# Group Mapping Properties #
# These properties allow normalizing group names coming from external sources like LDAP. The following example
# lowercases any group name.
#
# nifi.security.group.mapping.pattern.anygroup=^(.*)$
# nifi.security.group.mapping.value.anygroup=$1
# nifi.security.group.mapping.transform.anygroup=LOWER
# cluster common properties (all nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.heartbeat.missable.max=8
nifi.cluster.protocol.is.secure=false
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=node1
nifi.cluster.node.protocol.port=9991
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=60 sec
nifi.cluster.node.read.timeout=60 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=
# cluster load balancing properties #
nifi.cluster.load.balance.host=node1
nifi.cluster.load.balance.port=6342
nifi.cluster.load.balance.connections.per.node=2
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=node1:2181,node2:2182,node3:2183
nifi.zookeeper.connect.timeout=10 secs
nifi.zookeeper.session.timeout=10 secs
nifi.zookeeper.root.node=/nifi
nifi.zookeeper.client.secure=false
nifi.zookeeper.security.keystore=
nifi.zookeeper.security.keystoreType=
nifi.zookeeper.security.keystorePasswd=
nifi.zookeeper.security.truststore=
nifi.zookeeper.security.truststoreType=
nifi.zookeeper.security.truststorePasswd=
# Zookeeper properties for the authentication scheme used when creating acls on znodes used for cluster management
# Values supported for nifi.zookeeper.auth.type are "default", which will apply world/anyone rights on znodes
# and "sasl" which will give rights to the sasl/kerberos identity used to authenticate the nifi node
# The identity is determined using the value in nifi.kerberos.service.principal and the removeHostFromPrincipal
# and removeRealmFromPrincipal values (which should align with the kerberos.removeHostFromPrincipal and kerberos.removeRealmFromPrincipal
# values configured on the zookeeper server).
nifi.zookeeper.auth.type=
nifi.zookeeper.kerberos.removeHostFromPrincipal=
nifi.zookeeper.kerberos.removeRealmFromPrincipal=
# kerberos #
nifi.kerberos.krb5.file=
# kerberos service principal #
nifi.kerberos.service.principal=
nifi.kerberos.service.keytab.location=
# kerberos spnego principal #
nifi.kerberos.spnego.principal=
nifi.kerberos.spnego.keytab.location=
nifi.kerberos.spnego.authentication.expiration=12 hours
# external properties files for variable registry
# supports a comma delimited list of file locations
nifi.variable.registry.properties=
# analytics properties #
nifi.analytics.predict.enabled=false
nifi.analytics.predict.interval=3 mins
nifi.analytics.query.interval=5 mins
nifi.analytics.connection.model.implementation=org.apache.nifi.controller.status.analytics.models.OrdinaryLeastSquares
nifi.analytics.connection.model.score.name=rSquared
nifi.analytics.connection.model.score.threshold=.90
# runtime monitoring properties
nifi.monitor.long.running.task.schedule=
nifi.monitor.long.running.task.threshold=
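For reference, the properties the Round Robin load balancer depends on are the cluster load-balancing ones above. On the other nodes they would look roughly like this (a sketch only; node2 is shown, node3 is analogous, and the hostnames are assumed to follow the same pattern as node1):

# Sketch of node2's cluster/load-balance settings (node3 analogous).
# Each node must advertise its own reachable hostname, and the
# load-balance port (6342 here) must be open between all nodes.
nifi.web.http.host=node2
nifi.cluster.is.node=true
nifi.cluster.node.address=node2
nifi.cluster.node.protocol.port=9991
nifi.cluster.load.balance.host=node2
nifi.cluster.load.balance.port=6342
nifi.zookeeper.connect.string=node1:2181,node2:2182,node3:2183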
Hopefully someone can give me some advice.
Thanks in advance.
I am using the Sonar Swift plugin and have set up Sonar on OS X. I am using Sonar 6.1, sonar-runner 2.8 and backelite-sonar-swift-plugin-0.2.4.jar. After running ./run-sonar-swift.sh it gives the following error. I am using a MySQL DB; Sonar connects to the DB and creates all the tables, but it ends with an error.
Done linting! Found 34 violations, 2 serious in 4 files.
Running Lizard.....Running SonarQube using SonarQube Runner..WARN: Property 'sonar.jdbc.url' is not supported any more. It will be ignored. There is no longer any DB connection to the SQ database.
WARN: Property 'sonar.jdbc.username' is not supported any more. It will be ignored. There is no longer any DB connection to the SQ database.
WARN: Property 'sonar.jdbc.password' is not supported any more. It will be ignored. There is no longer any DB connection to the SQ database.
....WARN: SCM provider autodetection failed. No SCM provider claims to support this project. Please use sonar.scm.provider to define SCM of your project.
.ERROR: Error during SonarQube Scanner execution
ERROR: File [moduleKey=abc.Krish, relative=KrishTests/KrishTests.swift, basedir=/Users/vjayah1/Desktop/Krish] can't be indexed twice. Please check that inclusion/exclusion patterns produce disjoint sets for main and test files
ERROR:
ERROR: Re-run SonarQube Scanner using the -X switch to enable full debug logging.
ERROR - Command 'sonar-runner ' failed with error code: 1
sonar-runner.properties (/usr/local/Cellar/sonar-runner/2.8/libexec/conf)
#Configure here general information about the environment, such as SonarQube DB details for example
#No information about specific project should appear here
#----- Default SonarQube server
sonar.host.url=http://localhost:9000
#----- Default source code encoding
#sonar.sourceEncoding=UTF-8
#----- Global database settings (not used for SonarQube 5.2+)
sonar.jdbc.username=root
sonar.jdbc.password=1234
#----- PostgreSQL
#sonar.jdbc.url=jdbc:postgresql://localhost/sonar
#----- MySQL
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar_firstdb?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
#----- Oracle
#sonar.jdbc.url=jdbc:oracle:thin:@localhost/XE
#----- Microsoft SQLServer
#sonar.jdbc.url=jdbc:jtds:sqlserver://localhost/sonar;SelectMethod=Cursor
sonar.properties file (/usr/local/Cellar/sonarqube/6.1/libexec/conf)
# Property values can:
# - reference an environment variable, for example sonar.jdbc.url= ${env:SONAR_JDBC_URL}
# - be encrypted. See http://redirect.sonarsource.com/doc/settings-encryption.html
#--------------------------------------------------------------------------------------------------
# DATABASE
#
# IMPORTANT: the embedded H2 database is used by default. It is recommended for tests but not for
# production use. Supported databases are MySQL, Oracle, PostgreSQL and Microsoft SQLServer.
# User credentials.
# Permissions to create tables, indices and triggers must be granted to JDBC user.
# The schema must be created first.
sonar.jdbc.username=root
sonar.jdbc.password=1234
#----- Embedded Database (default)
# H2 embedded database server listening port, defaults to 9092
#sonar.embeddedDatabase.port=9092
#----- MySQL 5.6 or greater
# Only InnoDB storage engine is supported (not myISAM).
# Only the bundled driver is supported. It can not be changed.
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar_firstdb?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
#----- Oracle 11g/12c
# - Only thin client is supported
# - Only versions 11.2.x and 12.x of Oracle JDBC driver are supported
# - The JDBC driver must be copied into the directory extensions/jdbc-driver/oracle/
# - If you need to set the schema, please refer to http://jira.sonarsource.com/browse/SONAR-5000
#sonar.jdbc.url=jdbc:oracle:thin:@localhost:1521/XE
#----- PostgreSQL 8.x/9.x
# If you don't use the schema named "public", please refer to http://jira.sonarsource.com/browse/SONAR-5000
#sonar.jdbc.url=jdbc:postgresql://localhost/sonar
#----- Microsoft SQLServer 2012/2014 and SQL Azure
# A database named sonar must exist and its collation must be case-sensitive (CS) and accent-sensitive (AS)
# Use the following connection string if you want to use integrated security with Microsoft Sql Server
# Do not set sonar.jdbc.username or sonar.jdbc.password property if you are using Integrated Security
# For Integrated Security to work, you have to download the Microsoft SQL JDBC driver package from
# http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=11774
# and copy sqljdbc_auth.dll to your path. You have to copy the 32 bit or 64 bit version of the dll
# depending upon the architecture of your server machine.
# This version of SonarQube has been tested with Microsoft SQL JDBC version 4.1
#sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar;integratedSecurity=true
# Use the following connection string if you want to use SQL Auth while connecting to MS Sql Server.
# Set the sonar.jdbc.username and sonar.jdbc.password appropriately.
#sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar
#----- Connection pool settings
# The maximum number of active connections that can be allocated
# at the same time, or negative for no limit.
# The recommended value is 1.2 * max sizes of HTTP pools. For example if HTTP ports are
# enabled with default sizes (50, see property sonar.web.http.maxThreads)
# then sonar.jdbc.maxActive should be 1.2 * (50) = 120.
#sonar.jdbc.maxActive=60
# The maximum number of connections that can remain idle in the
# pool, without extra ones being released, or negative for no limit.
#sonar.jdbc.maxIdle=5
# The minimum number of connections that can remain idle in the pool,
# without extra ones being created, or zero to create none.
#sonar.jdbc.minIdle=2
# The maximum number of milliseconds that the pool will wait (when there
# are no available connections) for a connection to be returned before
# throwing an exception, or <= 0 to wait indefinitely.
#sonar.jdbc.maxWait=5000
#sonar.jdbc.minEvictableIdleTimeMillis=600000
#sonar.jdbc.timeBetweenEvictionRunsMillis=30000
#--------------------------------------------------------------------------------------------------
# WEB SERVER
# Web server is executed in a dedicated Java process. By default heap size is 512Mb.
# Use the following property to customize JVM options.
# Recommendations:
#
# The HotSpot Server VM is recommended. The property -server should be added if server mode
# is not enabled by default on your environment:
# http://docs.oracle.com/javase/8/docs/technotes/guides/vm/server-class.html
#
# Startup can be long if entropy source is short of entropy. Adding
# -Djava.security.egd=file:/dev/./urandom is an option to resolve the problem.
# See https://wiki.apache.org/tomcat/HowTo/FasterStartUp#Entropy_Source
#
#sonar.web.javaOpts=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.web.javaAdditionalOpts=
# Binding IP address. For servers with more than one IP address, this property specifies which
# address will be used for listening on the specified ports.
# By default, ports will be used on all IP addresses associated with the server.
#sonar.web.host=0.0.0.0
# Web context. When set, it must start with forward slash (for example /sonarqube).
# The default value is root context (empty value).
#sonar.web.context=
# TCP port for incoming HTTP connections. Default value is 9000.
sonar.web.port=9000
# The maximum number of connections that the server will accept and process at any given time.
# When this number has been reached, the server will not accept any more connections until
# the number of connections falls below this value. The operating system may still accept connections
# based on the sonar.web.connections.acceptCount property. The default value is 50.
#sonar.web.http.maxThreads=50
# The minimum number of threads always kept running. The default value is 5.
#sonar.web.http.minThreads=5
# The maximum queue length for incoming connection requests when all possible request processing
# threads are in use. Any requests received when the queue is full will be refused.
# The default value is 25.
#sonar.web.http.acceptCount=25
# By default users are logged out and sessions closed when server is restarted.
# If you prefer keeping user sessions open, a secret should be defined. Value is
# HS256 key encoded with base64. It must be unique for each installation of SonarQube.
# Example of command-line:
# echo -n "type_what_you_want" | openssl dgst -sha256 -hmac "key" -binary | base64
#sonar.auth.jwtBase64Hs256Secret=
#--------------------------------------------------------------------------------------------------
# COMPUTE ENGINE
# The Compute Engine is responsible for processing background tasks.
# Compute Engine is executed in a dedicated Java process. Default heap size is 512Mb.
# Use the following property to customize JVM options.
# Recommendations:
#
# The HotSpot Server VM is recommended. The property -server should be added if server mode
# is not enabled by default on your environment:
# http://docs.oracle.com/javase/8/docs/technotes/guides/vm/server-class.html
#
#sonar.ce.javaOpts=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.ce.javaAdditionalOpts=
# The number of workers in the Compute Engine. Value must be greater than zero.
# By default the Compute Engine uses a single worker and therefore processes tasks one at a time.
# Recommendations:
#
# Using N workers will require N times as much Heap memory (see property
# sonar.ce.javaOpts to tune heap) and produce N times as much IOs on disk, database and
# Elasticsearch. The number of workers must suit your environment.
#sonar.ce.workerCount=1
#--------------------------------------------------------------------------------------------------
# ELASTICSEARCH
# Elasticsearch is used to facilitate fast and accurate information retrieval.
# It is executed in a dedicated Java process. Default heap size is 1Gb.
# JVM options of Elasticsearch process
# Recommendations:
#
# Use HotSpot Server VM. The property -server should be added if server mode
# is not enabled by default on your environment:
# http://docs.oracle.com/javase/8/docs/technotes/guides/vm/server-class.html
#
#sonar.search.javaOpts=-Xmx1G -Xms256m -Xss256k -Djna.nosys=true \
# -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 \
# -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.search.javaAdditionalOpts=
# Elasticsearch port. Default is 9001. Use 0 to get a free port.
# As a security precaution, should be blocked by a firewall and not exposed to the Internet.
#sonar.search.port=9001
# Elasticsearch host. The search server will bind this address and the search client will connect to it.
# Default is 127.0.0.1.
# As a security precaution, should NOT be set to a publicly available address.
#sonar.search.host=127.0.0.1
#--------------------------------------------------------------------------------------------------
# UPDATE CENTER
# Update Center requires an internet connection to request https://update.sonarsource.org
# It is enabled by default.
#sonar.updatecenter.activate=true
# HTTP proxy (default none)
#http.proxyHost=
#http.proxyPort=
# HTTPS proxy (defaults are values of http.proxyHost and http.proxyPort)
#https.proxyHost=
#https.proxyPort=
# NT domain name if NTLM proxy is used
#http.auth.ntlm.domain=
# SOCKS proxy (default none)
#socksProxyHost=
#socksProxyPort=
# Proxy authentication (used for HTTP, HTTPS and SOCKS proxies)
#http.proxyUser=
#http.proxyPassword=
#--------------------------------------------------------------------------------------------------
# LOGGING
# Level of logs. Supported values are INFO(default), DEBUG and TRACE (DEBUG + SQL + ES requests)
#sonar.log.level=INFO
# Path to log files. Can be absolute or relative to installation directory.
# Default is <installation home>/logs
#sonar.path.logs=logs
# Rolling policy of log files
# - based on time if value starts with "time:", for example by day ("time:yyyy-MM-dd")
# or by month ("time:yyyy-MM")
# - based on size if value starts with "size:", for example "size:10MB"
# - disabled if value is "none". That needs logs to be managed by an external system like logrotate.
#sonar.log.rollingPolicy=time:yyyy-MM-dd
# Maximum number of files to keep if a rolling policy is enabled.
# - maximum value is 20 on size rolling policy
# - unlimited on time rolling policy. Set to zero to disable old file purging.
#sonar.log.maxFiles=7
# Access log is the list of all the HTTP requests received by server. If enabled, it is stored
# in the file {sonar.path.logs}/access.log. This file follows the same rolling policy as for
# sonar.log (see sonar.log.rollingPolicy and sonar.log.maxFiles).
#sonar.web.accessLogs.enable=true
# Format of access log. It is ignored if sonar.web.accessLogs.enable=false. Possible values are:
# - "common" is the Common Log Format, shortcut to: %h %l %u %user %date "%r" %s %b
# - "combined" is another format widely recognized, shortcut to: %h %l %u [%t] "%r" %s %b "%i{Referer}" "%i{User-Agent}"
# - else a custom pattern. See http://logback.qos.ch/manual/layouts.html#AccessPatternLayout.
# The login of authenticated user is not implemented with "%u" but with "%reqAttribute{LOGIN}" (since version 6.1).
# The value displayed for anonymous users is "-".
# If SonarQube is behind a reverse proxy, then the following value allows to display the correct remote IP address:
#sonar.web.accessLogs.pattern=%i{X-Forwarded-For} %l %u [%t] "%r" %s %b "%i{Referer}" "%i{User-Agent}"
# Default value is:
#sonar.web.accessLogs.pattern=combined
#--------------------------------------------------------------------------------------------------
# OTHERS
# Delay in seconds between processing of notification queue. Default is 60 seconds.
#sonar.notifications.delay=60
# Paths to persistent data files (embedded database and search index) and temporary files.
# Can be absolute or relative to installation directory.
# Defaults are respectively <installation home>/data and <installation home>/temp
#sonar.path.data=data
#sonar.path.temp=temp
#--------------------------------------------------------------------------------------------------
# DEVELOPMENT - only for developers
# The following properties MUST NOT be used in production environments.
# Dev mode allows to reload web sources on changes and to restart server when new versions
# of plugins are deployed.
#sonar.web.dev=false
# Path to webapp sources for hot-reloading of Ruby on Rails, JS and CSS (only core,
# plugins not supported).
#sonar.web.dev.sources=/path/to/server/sonar-web/src/main/webapp
# Elasticsearch HTTP connector, for example for KOPF:
# http://lmenezes.com/elasticsearch-kopf/?location=http://localhost:9010
#sonar.search.httpPort=-1
sonar-project.properties (project root folder)
##########################
# Required configuration #
##########################
sonar.projectKey=abc.Krish
sonar.projectName=Krish
sonar.projectVersion=1.0
# Comment if you have a project with mixed ObjC / Swift
sonar.language=swift
# Project description
sonar.projectDescription=prjDescription
# Path to source directories
sonar.sources=/Users/vjayah1/Desktop/Krish
# Path to test directories (comment if no test)
sonar.tests=/Users/vjayah1/Desktop/Krish/KrishTests
# Destination Simulator to run tests
# As string expected in destination argument of xcodebuild command
# Example = sonar.swift.simulator=platform=iOS Simulator,name=iPhone 6,OS=9.2
sonar.swift.simulator=platform=iOS Simulator,name=iPhone 7,OS=10.1
# Xcode project configuration (.xcodeproj)
# and use the latter to specify which project(s) to include in the analysis (comma separated list)
# Specify either xcodeproj or xcodeproj + xcworkspace
#sonar.swift.project=MyPrj.xcodeproj
#sonar.swift.workspace=MyWrkSpc.xcworkspace
sonar.swift.project=Krish.xcodeproj
# Scheme to build your application
sonar.swift.appScheme=Krish
# Configuration to use for your scheme. If you do not specify it, the default will be Debug
sonar.swift.appConfiguration=Debug
##########################
# Optional configuration #
##########################
# Encoding of the source code
sonar.sourceEncoding=UTF-8
# SCM
# sonar.scm.enabled=true
# sonar.scm.url=scm:git:http://xxx
# JUnit report generated by run-sonar.sh is stored in sonar-reports/TEST-report.xml
# Change it only if you generate the file on your own
# The XML files have to be prefixed by TEST- otherwise they are not processed
# sonar.junit.reportsPath=sonar-reports/
# Lizard report generated by run-sonar.sh is stored in sonar-reports/lizard-report.xml
# Change it only if you generate the file on your own
# sonar.swift.lizard.report=sonar-reports/lizard-report.xml
# Cobertura report generated by run-sonar.sh is stored in sonar-reports/coverage.xml
# Change it only if you generate the file on your own
# sonar.swift.coverage.reportPattern=sonar-reports/coverage*.xml
# OCLint report generated by run-sonar.sh is stored in sonar-reports/oclint.xml
# Change it only if you generate the file on your own
# sonar.swift.swiftlint.report=sonar-reports/*swiftlint.txt
# Paths to exclude from coverage report (tests, 3rd party libraries etc.)
# sonar.swift.excludedPathsFromCoverage=pattern1,pattern2
sonar.swift.excludedPathsFromCoverage=.*Tests.*
This is a configuration issue:
# Path to source directories
sonar.sources=/Users/vjayah1/Desktop/Krish
# Path to test directories (comment if no test)
sonar.tests=/Users/vjayah1/Desktop/Krish/KrishTests
Source folders are indexed recursively, so /Users/vjayah1/Desktop/Krish will also contain /Users/vjayah1/Desktop/Krish/KrishTests.
You should exclude the test folder from the main sources:
sonar.exclusions=/Users/vjayah1/Desktop/Krish/KrishTests/**/*
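A sketch of the equivalent configuration using relative patterns (assuming the scanner's base directory is /Users/vjayah1/Desktop/Krish); exclusion patterns are resolved relative to the base directory, so this form is usually less fragile:

# Main sources are the project root, tests live in KrishTests,
# and KrishTests is excluded from main source indexing.
sonar.sources=.
sonar.tests=KrishTests
sonar.exclusions=KrishTests/**/*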
I'm experiencing an unresponsive socket with my Puma setup after a random amount of time. Up to this point I don't have a clue what's causing the issue. I was hoping somebody here could help me with some answers or point me in the right direction. I have the following setup:
I'm using the official Docker ruby-2.2.3-slim image together with the latest Puma release, 2.15.3, and I've also installed Nginx as a reverse proxy. I'm already fairly sure Nginx isn't the problem here: I tried to verify whether the socket was working using this script, and the socket wasn't working there either (I got a timeout as well), so I can rule out Nginx.
This is a testing environment, so the server isn't experiencing any extreme load. I've also checked memory consumption; there are still several GBs free, so that couldn't be the issue either.
What triggered me to look at the Puma socket was the error message I got in my Nginx error log:
upstream timed out (110: Connection timed out) while reading response header from upstream
I also couldn't find anything in Puma's logs indicating what is going wrong. Here is my Puma setup:
threads 0, 16
app_dir = ENV.fetch('APP_HOME')
environment ENV['RAILS_ENV']
daemonize
bind "unix://#{app_dir}/sockets/puma.sock"
stdout_redirect "#{app_dir}/log/puma.stdout.log", "#{app_dir}/log/puma.stderr.log", true
pidfile "#{app_dir}/pids/puma.pid"
state_path "#{app_dir}/pids/puma.state"
activate_control_app
on_worker_boot do
  require 'active_record'
  ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
  ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[ENV['RAILS_ENV']])
end
And this it the output in my puma state file:
---
pid: 43
config: !ruby/object:Puma::Configuration
  cli_options:
  conf:
  options:
    :min_threads: 0
    :max_threads: 16
    :quiet: false
    :debug: false
    :binds:
    - unix:///APP/sockets/puma.sock
    :workers: 1
    :daemon: true
    :mode: :http
    :before_fork: []
    :worker_timeout: 60
    :worker_boot_timeout: 60
    :worker_shutdown_timeout: 30
    :environment: staging
    :redirect_stdout: "/APP/log/puma.stdout.log"
    :redirect_stderr: "/APP/log/puma.stderr.log"
    :redirect_append: true
    :pidfile: "/APP/pids/puma.pid"
    :state: "/APP/pids/puma.state"
    :control_url: unix:///tmp/puma-status-1449260516541-37
    :config_file: config/puma.rb
    :control_url_temp: "/tmp/puma-status-1449260516541-37"
    :control_auth_token: cda8879717be7a645ea323d931b88d4b
    :tag: APP
The application itself is a Rails app on the latest version, 4.2.5, deployed on GCE (Google Container Engine).
If somebody could give me some pointers on how to debug this further, it would be very much appreciated, because right now I don't see any output anywhere that could help me.
EDIT
I replaced the Unix socket with a TCP connection to Puma with the same result: it still hangs after some amount of time.
I'd start with:
How many requests get processed successfully per instance of puma?
Make sure you log the beginning and end of each request, together with the id of the thread executing it. What do you see?
Not knowing more about your application, I'd say it's likely the threads get stuck doing some long/blocking calls without timeouts or spinning on some computation until the whole thread pool gets depleted.
We'll see.
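A minimal sketch of that kind of per-request logging, assuming a Rails app (the middleware name and log format below are made up for illustration):

# A hypothetical Rack middleware: logs the start and end of every request
# together with the id of the thread servicing it.
class RequestThreadLogger
  def initialize(app)
    @app = app
  end

  def call(env)
    tid = Thread.current.object_id
    started = Time.now
    Rails.logger.info("[thread #{tid}] START #{env['REQUEST_METHOD']} #{env['PATH_INFO']}")
    @app.call(env)
  ensure
    Rails.logger.info("[thread #{tid}] END #{env['PATH_INFO']} (#{(Time.now - started).round(3)}s)")
  end
end

# For example, in config/application.rb:
# config.middleware.insert_before(0, RequestThreadLogger)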
I finally found out why my application was behaving the way it did.
After trying a TCP connection and switching to Unicorn, I started looking into other possible sources.
That's when I thought my connection to Google Cloud SQL could be the problem. The Cloud SQL FAQ mentions that you have to tweak your Compute Engine instances to ensure they keep your DB connections open. I performed the steps they recommend and that solved the problem for me; I've added them here just in case:
# Display the current tcp_keepalive_time value.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time
# Set tcp_keepalive_time to 60 seconds and make it permanent across reboots.
$ echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
# Apply the change.
$ sudo /sbin/sysctl --load=/etc/sysctl.conf
# Display the tcp_keepalive_time value to verify the change was applied.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time
How can I send rescued exceptions to New Relic?
I have a test file rpm.rb:
require 'newrelic_rpm'
NewRelic::Agent.manual_start
begin
  "2" + 3
rescue TypeError => e
  puts "whoa !"
  NewRelic::Agent.agent.error_collector.notice_error( e )
end
I start it with:
NEWRELIC_ENABLE=true ruby rpm.rb
The content of log/newrelic_agent.log:
[05/14/13 ... (87691)] INFO : Reading configuration from config/newrelic.yml
[05/14/13 ... (87691)] INFO : Environment: development
[05/14/13 ... (87691)] WARN : No dispatcher detected.
[05/14/13 ... (87691)] INFO : Application: xxx (Development)
[05/14/13 ... (87691)] INFO : Installing Net instrumentation
[05/14/13 ... (87691)] INFO : Audit log enabled at '.../log/newrelic_audit.log'
[05/14/13 ... (87691)] INFO : Finished instrumentation
[05/14/13 ... (87691)] INFO : Reading configuration from config/newrelic.yml
[05/14/13 ... (87691)] INFO : Starting Agent shutdown
The content of log/newrelic_audit.log
[2013-05-14 ... (87691)] : REQUEST: collector.newrelic.com:443/agent_listener/12/74901a11b7ff1a69aba11d1797830c8c1af41d56/get_redirect_host?marshal_format=json
[2013-05-14 ... (87691)] : REQUEST BODY: []
Nothing is reported to New Relic. Why?
I have already seen this: Is there way to push NewRelic error manually?
I just spent an hour trying to test this from a production console.
This is what finally got it working:
Make sure monitor_mode: true is set in newrelic.yml for the appropriate environment.
Make sure to run the Rails console with NEW_RELIC_AGENT_ENABLED=true NEWRELIC_ENABLE=true rails c.
Make sure to use the public API method NewRelic::Agent.notice_error(exception).
Naturally, .notice_error will work as expected when called from a non-console process like the web server.
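Applied to the rpm.rb file from the question, a sketch of what that public API call looks like (NewRelic::Agent.shutdown is added here only so the short-lived script flushes its data before exiting; adjust as needed):

require 'newrelic_rpm'

NewRelic::Agent.manual_start

begin
  "2" + 3
rescue TypeError => e
  puts "whoa !"
  # Public API: report the rescued exception to New Relic.
  NewRelic::Agent.notice_error(e)
end

# Stop the agent and send any remaining data before the process exits.
NewRelic::Agent.shutdown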
You need to set monitor_mode: true in config/newrelic.yml
development:
  <<: *default_settings
  # Turn off communication to New Relic service in development mode (also
  # 'enabled').
  # NOTE: for initial evaluation purposes, you may want to temporarily
  # turn the agent on in development mode.
  monitor_mode: true
  # Rails Only - when running in Developer Mode, the New Relic Agent will
  # present performance information on the last 100 transactions you have
  # executed since starting the mongrel.
  # NOTE: There is substantial overhead when running in developer mode.
  # Do not use for production or load testing.
  developer_mode: true