When running nifi-toolkit/encrypt-config.sh, nifi-registry.properties does not get an encrypted password for nifi.registry.db.password - apache-nifi

We have a problem with nifi-toolkit. When we run encrypt-config.sh, nifi-registry.properties does not end up with an encrypted password for nifi.registry.db.password.
$ ./nifi-toolkit-1.19.1/bin/encrypt-config.sh --nifiRegistry -b nifi-registry-1.19.1/conf/bootstrap.conf -r nifi-registry-1.19.1/conf/nifi-registry.properties -a nifi-registry-1.19.1/conf/authorizers.xml -p adminpassword
How can this problem be resolved?
Steps to reproduce the problem:
Postgres version: postgres:13.9
Download nifi-registry and nifi-toolkit and unzip the files:
https://www.apache.org/dyn/closer.lua?path=/nifi/1.19.1/nifi-registry-1.19.1-bin.zip
https://dlcdn.apache.org/nifi/1.19.1/nifi-toolkit-1.19.1-bin.zip
Edit nifi-registry-1.19.1/conf/nifi-registry.properties to connect to the Postgres DB:
nifi.registry.db.directory=./database
nifi.registry.db.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
nifi.registry.db.url=jdbc:postgresql://localhost:5432/nifireg
nifi.registry.db.driver.class=org.postgresql.Driver
nifi.registry.db.driver.directory=./conf/postgresql-42.5.1.jar
nifi.registry.db.username=postgres
nifi.registry.db.password=postgres
nifi.registry.db.maxConnections=5
nifi.registry.db.sql.debug=false
Create a database and a database user, and grant privileges in Postgres:
postgres=# CREATE DATABASE nifireg;
postgres=# CREATE USER postgres WITH PASSWORD 'postgres';
postgres=# GRANT ALL PRIVILEGES ON DATABASE nifireg to postgres;
Update the H2 database file name to be nifi-registry.mv.db.
$ mv nifi-registry-1.19.1/database/nifi-registry-primary.mv.db nifi-registry-1.19.1/database/nifi-registry.mv.db
Start the Registry and check.
$ nifi-registry-1.19.1/bin/nifi-registry.sh start
$ nifi-registry-1.19.1/bin/nifi-registry.sh status
Run the encryption command:
$ ./nifi-toolkit-1.19.1/bin/encrypt-config.sh --nifiRegistry -b nifi-registry-1.19.1/conf/bootstrap.conf -r nifi-registry-1.19.1/conf/nifi-registry.properties -a nifi-registry-1.19.1/conf/authorizers.xml -p adminpassword
The resulting nifi-registry.properties:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# web properties
nifi.registry.web.war.directory=./lib
nifi.registry.web.http.host=
nifi.registry.web.http.port=18080
nifi.registry.web.https.host=
nifi.registry.web.https.port=
nifi.registry.web.https.application.protocols=http/1.1
nifi.registry.web.jetty.working.directory=./work/jetty
nifi.registry.web.jetty.threads=200
nifi.registry.web.should.send.server.version=true
# security properties
nifi.registry.security.keystore=
nifi.registry.security.keystoreType=
nifi.registry.security.keystorePasswd=
nifi.registry.security.keyPasswd=
nifi.registry.security.truststore=
nifi.registry.security.truststoreType=
nifi.registry.security.truststorePasswd=
nifi.registry.security.needClientAuth=
nifi.registry.security.authorizers.configuration.file=./conf/authorizers.xml
nifi.registry.security.authorizer=managed-authorizer
nifi.registry.security.identity.providers.configuration.file=./conf/identity-providers.xml
nifi.registry.security.identity.provider=
# sensitive property protection properties
# nifi.registry.sensitive.props.additional.keys=
# providers properties
nifi.registry.providers.configuration.file=./conf/providers.xml
# registry alias properties
nifi.registry.registry.alias.configuration.file=./conf/registry-aliases.xml
# extensions working dir
nifi.registry.extensions.working.directory=./work/extensions
# legacy database properties, used to migrate data from original DB to new DB below
# NOTE: Users upgrading from 0.1.0 should leave these populated, but new installs after 0.1.0 should leave these empty
nifi.registry.db.directory=./database
nifi.registry.db.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
nifi.registry.db.url=jdbc:postgresql://localhost:5432/nifireg
nifi.registry.db.driver.class=org.postgresql.Driver
nifi.registry.db.driver.directory=./conf/postgresql-42.5.1.jar
nifi.registry.db.username=postgres
nifi.registry.db.password=postgres
nifi.registry.db.maxConnections=5
nifi.registry.db.sql.debug=false
# extension directories
# Each property beginning with "nifi.registry.extension.dir." will be treated as location for an extension,
# and a class loader will be created for each location, with the system class loader as the parent
#
#nifi.registry.extension.dir.1=/path/to/extension1
#nifi.registry.extension.dir.2=/path/to/extension2
nifi.registry.extension.dir.aws=./ext/aws/lib
# Identity Mapping Properties
# These properties allow normalizing user identities such that identities coming from different identity providers
# (certificates, LDAP, Kerberos) can be treated the same internally in NiFi. The following example demonstrates normalizing
# DNs from certificates and principals from Kerberos into a common identity string:
#
# nifi.registry.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?), O=(.*?), L=(.*?), ST=(.*?), C=(.*?)$
# nifi.registry.security.identity.mapping.value.dn=$1#$2
# nifi.registry.security.identity.mapping.transform.dn=NONE
# nifi.registry.security.identity.mapping.pattern.kerb=^(.*?)/instance#(.*?)$
# nifi.registry.security.identity.mapping.value.kerb=$1#$2
# nifi.registry.security.identity.mapping.transform.kerb=UPPER
# Group Mapping Properties
# These properties allow normalizing group names coming from external sources like LDAP. The following example
# lowercases any group name.
#
# nifi.registry.security.group.mapping.pattern.anygroup=^(.*)$
# nifi.registry.security.group.mapping.value.anygroup=$1
# nifi.registry.security.group.mapping.transform.anygroup=LOWER
# kerberos properties
nifi.registry.kerberos.krb5.file=
nifi.registry.kerberos.spnego.principal=
nifi.registry.kerberos.spnego.keytab.location=
nifi.registry.kerberos.spnego.authentication.expiration=12 hours
# OIDC
nifi.registry.security.user.oidc.discovery.url=
nifi.registry.security.user.oidc.connect.timeout=
nifi.registry.security.user.oidc.read.timeout=
nifi.registry.security.user.oidc.client.id=
nifi.registry.security.user.oidc.client.secret=
nifi.registry.security.user.oidc.preferred.jwsalgorithm=
# revision management
# This feature should remain disabled until a future NiFi release that supports the revision API changes
nifi.registry.revisions.enabled=false

We found a solution.
We set the property nifi.registry.sensitive.props.additional.keys=nifi.registry.db.password in nifi-registry.properties, and nifi.registry.db.password was encrypted.
More about nifi.registry.sensitive.props.additional.keys: https://nifi.apache.org/docs/nifi-registry-docs/html/administration-guide.html#encrypted-passwords-in-configuration-files
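The fix in properties form, as a minimal sketch (the file path and key name are the ones from the question and the admin guide):

```properties
# nifi-registry-1.19.1/conf/nifi-registry.properties
# Extra keys the toolkit should treat as sensitive and encrypt.
nifi.registry.sensitive.props.additional.keys=nifi.registry.db.password
```

After adding this line, re-running the same encrypt-config.sh command encrypts nifi.registry.db.password along with the default sensitive properties.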

Related

Kafka MongoDB Sink Connector Error! Not starting

I am trying to connect Kafka with MongoDB on Linux Ubuntu 20.04. It was previously working fine, but now I am facing an error while running it.
Here is how I am trying to connect Kafka with MongoDB.
I have made a separate connect-standalone_bare.properties file which contains the following entries:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# These are defaults. This file just demonstrates how to override some settings.
bootstrap.servers=localhost:9092
# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=false
value.converter.schemas.enable=false
rest.port:8084
offset.storage.file.filename=/tmp/connect.offsets-1
# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000
# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include
# any combination of:
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Note: symlinks will be followed to discover dependencies or plugins.
# Examples:
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
plugin.path=/home/ms-batch18/Documents/kafka_2.13-2.8.0/libs/mongo-kafka-connect-1.2.0-all.jar
And this is my MongoDB sink connector:
name=mongo-sink
topics=test
connector.class=com.mongodb.kafka.connect.MongoSinkConnector
tasks.max=1
key.ignore=true
connection.uri=mongodb://localhost:27017
database=test_kafka
collection=transaction
max.num.retries=3
retries.defer.timeout=5000
type.name=kafka-connect
schemas.enable=false
While running this command:
bin/connect-standalone.sh config/connect-standalone_bare.properties config/MongoSinkConnector.properties
I am facing this error:
(org.apache.kafka.connect.runtime.WorkerInfo:71)
[2021-08-24 12:09:14,826] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:126)
java.nio.file.NoSuchFileException: config/connect-standalone_bare.properties
at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
at java.base/java.nio.file.Files.newByteChannel(Files.java:371)
at java.base/java.nio.file.Files.newByteChannel(Files.java:422)
at java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
at java.base/java.nio.file.Files.newInputStream(Files.java:156)
at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:629)
at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:616)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:75)
The error says that the properties file you provided doesn't exist relative to your shell's current working directory.
Give the absolute path to verify it exists.
It's also recommended to use connect-distributed instead.
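A quick sanity check of the cause, as a sketch (the file name comes from the question): a relative path is resolved against the shell's current directory, so the same command works or fails depending on where it is launched from.

```shell
# Kafka Connect resolves "config/..." against the current directory,
# not the Kafka install dir, so check before launching.
CONFIG="config/connect-standalone_bare.properties"
if [ -f "$CONFIG" ]; then
  MSG="found relative to $(pwd)"
else
  MSG="not found relative to $(pwd) - pass an absolute path instead"
fi
echo "$MSG"
```

Passing absolute paths (built from wherever your Kafka install actually lives) avoids the NoSuchFileException regardless of the working directory.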

Comprehensive reference to sqoop.properties with version

I am looking for a reference for sqoop.properties that explains the details of each property. This sqoop.properties file on GitHub is good. Is there any other reference? I was not able to locate this file in our installation. We currently use only one property:
jdbc.transaction.isolation=TRANSACTION_READ_UNCOMMITTED
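Since the file could not be located on the installation, a quick filesystem probe may help. The directories below are guesses for common layouts; the authoritative location is whatever -Dsqoop.config.dir is set to on the Sqoop server's JVM, so adjust as needed.

```shell
# Probe a few common config roots for sqoop.properties.
FOUND=""
for dir in /etc/sqoop2/conf /etc/sqoop/conf /opt/sqoop/conf /usr/lib/sqoop/conf; do
  [ -f "$dir/sqoop.properties" ] && FOUND="$dir/sqoop.properties"
done
echo "${FOUND:-sqoop.properties not found in the usual locations}"
```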
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# Sqoop configuration file used by the built in configuration
# provider: org.apache.sqoop.core.PropertiesConfigurationProvider.
# This file must reside in the system configuration directory
# which is specified by the system property "sqoop.config.dir"
# and must be called sqoop.properties.
#
#
# Logging Configuration
# Any property that starts with the prefix
# org.apache.sqoop.log4j is parsed out by the configuration
# system and passed to the log4j subsystem. This allows you
# to specify log4j configuration properties from within the
# Sqoop configuration.
#
org.apache.sqoop.log4j.appender.file=org.apache.log4j.RollingFileAppender
org.apache.sqoop.log4j.appender.file.File=/var/log/sqoop/sqoop.log
org.apache.sqoop.log4j.appender.file.MaxFileSize=25MB
org.apache.sqoop.log4j.appender.file.MaxBackupIndex=5
org.apache.sqoop.log4j.appender.file.layout=org.apache.log4j.PatternLayout
org.apache.sqoop.log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} [%l] %m%n
org.apache.sqoop.log4j.debug=true
org.apache.sqoop.log4j.rootCategory=WARN, file
org.apache.sqoop.log4j.category.org.apache.sqoop=DEBUG
org.apache.sqoop.log4j.category.org.apache.derby=INFO
#
# Repository configuration
# The Repository subsystem provides the special prefix which
# is "org.apache.sqoop.repository.sysprop". Any property that
# is specified with this prefix is parsed out and set as a
# system property. For example, if the built in Derby repository
# is being used, the sysprop-prefixed properties can be used
# to affect Derby configuration at startup time by setting
# the appropriate system properties.
#
# Repository provider
org.apache.sqoop.repository.provider=org.apache.sqoop.repository.JdbcRepositoryProvider
# JDBC repository provider configuration
org.apache.sqoop.repository.jdbc.handler=org.apache.sqoop.repository.derby.DerbyRepositoryHandler
org.apache.sqoop.repository.jdbc.transaction.isolation=READ_COMMITTED
org.apache.sqoop.repository.jdbc.maximum.connections=10
org.apache.sqoop.repository.jdbc.url=jdbc:derby:/var/lib/sqoop/repository/db;create=true
org.apache.sqoop.repository.jdbc.create.schema=true
org.apache.sqoop.repository.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
org.apache.sqoop.repository.jdbc.user=sa
org.apache.sqoop.repository.jdbc.password=
# System properties for embedded Derby configuration
org.apache.sqoop.repository.sysprop.derby.stream.error.file=/var/log/sqoop/derbyrepo.log
# Sleeping period for reloading configuration file (once a minute)
org.apache.sqoop.core.configuration.provider.properties.sleep=60000
#
# Submission engine configuration
#
# Submission engine class
org.apache.sqoop.submission.engine=org.apache.sqoop.submission.mapreduce.MapreduceSubmissionEngine
# Number of milliseconds, submissions created before this limit will be removed, default is one day
#org.apache.sqoop.submission.purge.threshold=
# Number of milliseconds for purge thread to sleep, by default one day
#org.apache.sqoop.submission.purge.sleep=
# Number of milliseconds for update thread to sleep, by default 5 minutes
#org.apache.sqoop.submission.update.sleep=
#
# Configuration for Mapreduce submission engine (applicable if it's configured)
#
# Hadoop configuration directory
org.apache.sqoop.submission.engine.mapreduce.configuration.directory=/etc/hadoop/conf/
#
# Execution engine configuration
#
org.apache.sqoop.execution.engine=org.apache.sqoop.execution.mapreduce.MapreduceExecutionEngine

Servicemix as service remote debugging

I am running ServiceMix as a service using karaf-wrapper.exe. This exe uses the karaf-wrapper.conf file for its configuration. I enabled remote debugging in this conf file, and I also created an environment variable KARAF_DEBUG with the value TRUE, but I am still unable to connect with IntelliJ. The system displays "Could not open connection: Connection refused".
Please let me know if I am missing anything.
# ------------------------------------------------------------------------
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------------------------------------------------------------
#********************************************************************
# Wrapper Properties
#********************************************************************
set.default.KARAF_HOME=..\servicemix
set.default.KARAF_BASE=..\servicemix
set.default.KARAF_DATA=..\servicemix\data
# Java Application
wrapper.working.dir=%KARAF_BASE%
set.JAVA_HOME=c:\tools\java\jdk1.7.0_67
set.M2_HOME=c:\\tools\apache-maven-3.0.3
wrapper.java.command=%JAVA_HOME%\bin\java
wrapper.java.mainclass=org.apache.karaf.shell.wrapper.Main
wrapper.java.classpath.1=%KARAF_BASE%/lib/karaf-wrapper.jar
wrapper.java.classpath.2=%KARAF_HOME%/lib/karaf.jar
wrapper.java.classpath.3=%KARAF_HOME%/lib/karaf-jaas-boot.jar
wrapper.java.classpath.4=%KARAF_BASE%/lib/karaf-wrapper-main.jar
wrapper.java.library.path.1=%KARAF_BASE%/lib/
# Application Parameters. Add parameters as needed starting from 1
#wrapper.app.parameter.1=
# JVM Parameters
# note that n is the parameter number starting from 1.
wrapper.java.additional.1=-Dkaraf.home="%KARAF_HOME%"
wrapper.java.additional.2=-Dkaraf.base="%KARAF_BASE%"
wrapper.java.additional.3=-Dkaraf.data="%KARAF_DATA%"
wrapper.java.additional.4=-Dcom.sun.management.jmxremote
wrapper.java.additional.5=-Dkaraf.startLocalConsole=false
wrapper.java.additional.6=-Dkaraf.startRemoteShell=true
wrapper.java.additional.7=-Djava.endorsed.dirs="%JAVA_HOME%/jre/lib/endorsed;%JAVA_HOME%/lib/endorsed;%KARAF_HOME%/lib/endorsed"
wrapper.java.additional.8=-Djava.ext.dirs="%JAVA_HOME%/jre/lib/ext;%JAVA_HOME%/lib/ext;%KARAF_HOME%/lib/ext"
wrapper.java.additional.9=-XX:MaxPermSize=2048m
# Uncomment to enable jmx
#wrapper.java.additional.n=-Dcom.sun.management.jmxremote.port=1616
#wrapper.java.additional.n=-Dcom.sun.management.jmxremote.authenticate=false
#wrapper.java.additional.n=-Dcom.sun.management.jmxremote.ssl=false
# Uncomment to enable YourKit profiling
#wrapper.java.additional.n=-Xrunyjpagent
# Uncomment to enable remote debugging
wrapper.java.additional.n=-Xdebug -Xnoagent -Djava.compiler=NONE
wrapper.java.additional.n=-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005
# Initial Java Heap Size (in MB)
wrapper.java.initmemory=4096
# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=4096
#********************************************************************
# Wrapper Logging Properties
#********************************************************************
# Format of output for the console. (See docs for formats)
wrapper.console.format=PM
# Log Level for console output. (See docs for log levels)
wrapper.console.loglevel=INFO
# Log file to use for wrapper output logging.
wrapper.logfile=%KARAF_DATA%/log/wrapper.log
# Format of output for the log file. (See docs for formats)
wrapper.logfile.format=LPTM
# Log Level for log file output. (See docs for log levels)
wrapper.logfile.loglevel=INFO
# Maximum size that the log file will be allowed to grow to before
# the log is rolled. Size is specified in bytes. The default value
# of 0, disables log rolling. May abbreviate with the 'k' (kb) or
# 'm' (mb) suffix. For example: 10m = 10 megabytes.
wrapper.logfile.maxsize=10m
# Maximum number of rolled log files which will be allowed before old
# files are deleted. The default value of 0 implies no limit.
wrapper.logfile.maxfiles=5
# Log Level for sys/event log output. (See docs for log levels)
wrapper.syslog.loglevel=NONE
#********************************************************************
# Wrapper Windows Properties
#********************************************************************
# Title to use when running as a console
wrapper.console.title=Servicemix
#********************************************************************
# Wrapper Windows NT/2000/XP Service Properties
#********************************************************************
# WARNING - Do not modify any of these properties when an application
# using this configuration file has been installed as a service.
# Please uninstall the service before modifying this section. The
# service can then be reinstalled.
# Name of the service
wrapper.ntservice.name=Servicemix
# Display name of the service
wrapper.ntservice.displayname=Servicemix
# Description of the service
wrapper.ntservice.description=Apache Servicemix 5.x
# Service dependencies. Add dependencies as needed starting from 1
wrapper.ntservice.dependency.1=
# Mode in which the service is installed. AUTO_START or DEMAND_START
wrapper.ntservice.starttype=AUTO_START
# Allow the service to interact with the desktop.
wrapper.ntservice.interactive=false
That's easy to fix:
just make sure you replace the n with the right number.
Since your last numbered entry is:
wrapper.java.additional.9=-XX:MaxPermSize=2048m
you need to set the debugging entries to:
wrapper.java.additional.10=-Xdebug -Xnoagent -Djava.compiler=NONE
wrapper.java.additional.11=-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005
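After fixing the numbering and restarting the service, it can help to confirm the JVM is actually listening on the JDWP port before attaching IntelliJ. A minimal probe, assuming the service is reachable locally on port 5005 from the config above (bash syntax):

```shell
# Probe the JDWP port; a refused connection means the wrapper lines
# were not picked up (numbering still wrong, or service not restarted).
HOST=localhost
PORT=5005
if (exec 3<>"/dev/tcp/$HOST/$PORT") 2>/dev/null; then
  RESULT="port $PORT open - attach the debugger to $HOST:$PORT"
else
  RESULT="port $PORT closed - check the wrapper config and restart the service"
fi
echo "$RESULT"
```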

Is there a way to start a sonarqube server with an external sonar.properties file?

I want to know if there is a way to start a SonarQube (5.0.1) server with external sonar.properties and wrapper.conf files.
I am looking for something similar to Apache's "-f" option:
/apache2/bin/apachectl -f /path/to/httpd.conf
Thanks.
========================================================
As mentioned in the answer below, I tried to reference the properties with environment variables. This works for certain properties, e.g. sonar.jdbc.username and sonar.jdbc.password.
It did not work for me for a property value that contains multiple environment variables, e.g.:
sonar.jdbc.url=jdbc:mysql://${env:MYSQL_HOST}:${env:MYSQL_PORT}/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
Here is the exception I am getting:
2015.03.17 11:48:33 INFO web[o.s.c.p.Database] Create JDBC datasource for jdbc:mysql://${env:MYSQL_HOST}:${env:MYSQL_PORT}/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
2015.03.17 11:48:33 ERROR web[o.a.c.c.C.[.[.[/sonar]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.PlatformServletContextListener
java.lang.IllegalStateException: Can not connect to database. Please check connectivity and settings (see the properties prefixed by 'sonar.jdbc.').
==========================================================
I also tried with only one environment variable:
$echo $MYSQL_DB_URL
jdbc:mysql://devdbXXX:6000/sonar?useUnicode=true
I am getting this exception:
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
WrapperSimpleApp: Encountered an error running main: org.sonar.process.MessageException: Bad format of JDBC URL: ${env:MYSQL_DB_URL}
org.sonar.process.MessageException: Bad format of JDBC URL: ${env:MYSQL_DB_URL}
<-- Wrapper Stopped
This works if I hardcode the MySQL host URL.
It is something to do with URL formatting; still debugging...
In Ubuntu: yes, you can give an external file. Look at the sonar.sh file in the sonarqube bin folder:
#! /bin/sh
#
# Copyright (c) 1999, 2006 Tanuki Software Inc.
#
# Java Service Wrapper sh script. Suitable for starting and stopping
# wrapped Java applications on UNIX platforms.
#
#-----------------------------------------------------------------------------
# These settings can be modified to fit the needs of your application
# Default values for the Application variables, below.
#
# NOTE: The build for specific applications may override this during the resource-copying
# phase, to fill in a concrete name and avoid the use of the defaults specified here.
DEF_APP_NAME="SonarQube"
DEF_APP_LONG_NAME="SonarQube"
# Application
APP_NAME="${DEF_APP_NAME}"
APP_LONG_NAME="${DEF_APP_LONG_NAME}"
# Wrapper
WRAPPER_CMD="./wrapper"
WRAPPER_CONF="../../conf/wrapper.conf"
# Priority at which to run the wrapper. See "man nice" for valid priorities.
# nice is only used if a priority is specified.
PRIORITY=
# Location of the pid file.
PIDDIR="."
You can define the path to the wrapper file here with WRAPPER_CONF=, and for sonar.properties you can create a symlink in the sonarqube conf folder pointing to wherever you saved the file. A tougher option is to edit the sonar.sh script above to accept these paths as flags (e.g. -sp for sonar.properties and -wc for wrapper.conf).
Values in sonar.properties can be externalized by referencing environment variables.
From the sonarqube/5.0.1/conf/sonar.properties header:
# Property values can:
# - reference an environment variable, for example sonar.jdbc.url=${env:SONAR_JDBC_URL}
This approach needs the least file manipulation and solves my problem: I do not want to hardcode the property values, since they change per environment.
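One workaround for the multi-variable limitation above, as a sketch (the variable name SONAR_JDBC_URL is taken from the sonar.properties header example; host and port are illustrative): compose the full JDBC URL outside SonarQube and reference a single variable from sonar.properties.

```shell
# Build the whole URL as one environment variable; sonar.properties
# then only needs: sonar.jdbc.url=${env:SONAR_JDBC_URL}
MYSQL_HOST=devdbXXX
MYSQL_PORT=6000
export SONAR_JDBC_URL="jdbc:mysql://$MYSQL_HOST:$MYSQL_PORT/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true"
echo "$SONAR_JDBC_URL"
```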

Geronimo.out increase too fast

I have built a Grails project on Geronimo. I set up my own log4j configuration to write errors daily, and its output stays small.
My problem is that the geronimo.out file grows too fast: it reaches 1 GB in just a few days. I tried to disable the console appender, but Geronimo still writes to geronimo.out.
How can I disable that?
Here is my server-log4j.properties:
##
## Licensed to the Apache Software Foundation (ASF) under one or more
## contributor license agreements. See the NOTICE file distributed with
## this work for additional information regarding copyright ownership.
## The ASF licenses this file to You under the Apache License, Version 2.0
## (the "License"); you may not use this file except in compliance with
## the License. You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
##
## $Rev: 810770 $ $Date: 2009-09-03 11:32:24 +0800 (Thu, 03 Sep 2009) $
##
log4j.rootLogger=INFO
#log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
#log4j.appender.CONSOLE.Threshold=${org.apache.geronimo.log.ConsoleLogLevel}
#log4j.appender.CONSOLE.Target=System.out
#log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
#log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %-5p [%c{1}] %m%n
#log4j.appender.FILE=org.apache.log4j.RollingFileAppender
#log4j.appender.FILE.Threshold=TRACE
#log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
#log4j.appender.FILE.layout.ConversionPattern=%d{ISO8601} %-5p [%c{1}] %m%n
#
# Note, changing log4j.appender.FILE.append=false will result in logs being
# overwritten without archiving the previous version of the log.
#
#log4j.appender.FILE.append=true
#log4j.appender.FILE.file=${org.apache.geronimo.server.dir}/var/log/geronimo.log
#log4j.appender.FILE.bufferedIO=false
#log4j.appender.FILE.maxBackupIndex=3
#log4j.appender.FILE.maxFileSize=10MB
#
# Example: enable trace logging from CONSOLE appender
#
#log4j.appender.CONSOLE.Threshold=TRACE#org.apache.geronimo.system.logging.log4j.XLevel
#
# Example: enable trace messages from foo.bar category
#
#log4j.logger.foo.bar=TRACE#org.apache.geronimo.system.logging.log4j.XLevel
# Geronimo
#This will help find connection leak problems
#log4j.logger.org.apache.geronimo.connector.outbound=TRACE#org.apache.geronimo.system.logging.log4j.XLevel
log4j.logger.org.apache.geronimo.system.logging.log4j.Log4jService=INFO
#### Eliminate any INFO level output during normal operation -- except the really relevant stuff ####
# We can change the Geronimo code to avoid this, but we have to just adjust the log levels for
# any third-party libraries.
log4j.logger.org.apache.commons.digester=ERROR
log4j.logger.org.apache.jasper.compiler.SmapUtil=WARN
# ActiveMQ
log4j.logger.org.apache.activemq=WARN
log4j.logger.org.apache.activemq.broker.jmx.ManagementContext=ERROR
# Don't need so much info on every web page that's rendered
log4j.logger.org.mortbay=INFO
log4j.logger.org.apache.pluto=INFO
log4j.logger.org.apache.jasper=INFO
# Various Jetty startup/shutdown output
log4j.logger.org.mortbay.http.HttpServer=WARN
log4j.logger.org.mortbay.http.SocketListener=WARN
log4j.logger.org.mortbay.http.ajp.AJP13Listener=WARN
log4j.logger.org.mortbay.util.Container=WARN
log4j.logger.org.mortbay.util.Credential=WARN
log4j.logger.org.mortbay.util.ThreadedServer=WARN
log4j.logger.org.mortbay.jetty.servlet.WebApplicationContext=WARN
log4j.logger.org.mortbay.jetty.context=WARN
# Various Tomcat startup output
log4j.logger.org.apache.catalina.realm.JAASRealm=WARN
log4j.logger.org.apache.catalina.realm.RealmBase=WARN
log4j.logger.org.apache.catalina.loader.WebappLoader=WARN
log4j.logger.org.apache.catalina.startup.Embedded=WARN
log4j.logger.org.apache.catalina.core.StandardEngine=WARN
log4j.logger.org.apache.catalina.core.StandardHost=WARN
log4j.logger.org.apache.jk.common.ChannelSocket=WARN
log4j.logger.org.apache.jk.server.JkMain=WARN
log4j.logger.org.apache.coyote.http11.Http11BaseProtocol=WARN
log4j.logger.org.apache.coyote.http11.Http11Protocol=WARN
log4j.logger.org.apache.catalina.core.ContainerBase=WARN
log4j.logger.org.apache.catalina.core.StandardContext=WARN
log4j.logger.org.apache.tomcat.util.net.SSLImplementation=WARN
# myfaces startup output
log4j.logger.org.apache.myfaces.renderkit.html.HtmlRenderKitImpl=WARN
log4j.logger.org.apache.myfaces.config.FacesConfigurator=WARN
log4j.logger.org.apache.myfaces.webapp.StartupServletContextListener=WARN
log4j.logger.org.apache.myfaces.webapp.StartupServletContextListener=WARN
# emits a spurious warn about null locale during startup of webapps
log4j.logger.org.apache.myfaces.shared_impl.util.LocaleUtils=ERROR
# Emits a spurious WARN during startup on /some-path/* security mappings
log4j.logger.org.apache.catalina.deploy.SecurityCollection=ERROR
# Prints the MBean Server ID
log4j.logger.javax.management.MBeanServerFactory=WARN
# Prints the RMI connection URL
log4j.logger.javax.management.remote.rmi.RMIConnectorServer=WARN
log4j.logger.javax.management.remote.JMXServiceURL=WARN
# Prints various stuff during startup
log4j.logger.org.apache.juddi.registry.RegistryServlet=WARN
# Prints various stuff when the portal is used
log4j.logger.org.apache.pluto.portalImpl.Servlet=WARN
# Prints various stuff when registering portlets for context
log4j.logger.org.apache.pluto.core.PortletContextManager=WARN
# Prints stuff for AJAX calls
log4j.logger.uk.ltd.getahead.dwr.impl.DefaultConfiguration=WARN
log4j.logger.uk.ltd.getahead.dwr.impl.ExecuteQuery=WARN
log4j.logger.uk.ltd.getahead.dwr.util.Logger=WARN
# Prints various stuff when loading mapping descriptors in pluto
log4j.logger.org.exolab.castor.mapping.Mapping=WARN
# Prints various stuff when filtering the requests.
log4j.logger.org.apache.geronimo.console.filter.XSRFHandler=WARN
# Example: enable Axis debug log output
#log4j.logger.org.apache.axis.enterprise=DEBUG
#log4j.logger.org.apache.axis.TIME=DEBUG
#log4j.logger.org.apache.axis.EXCEPTIONS=DEBUG
# Example: enable Axis2 debug log output
#log4j.logger.org.apache.axis2.enterprise=DEBUG
#log4j.logger.de.hunsicker.jalopy.io=DEBUG
#log4j.logger.httpclient.wire.header=DEBUG
#log4j.logger.org.apache.commons.httpclient=DEBUG
# Example: enable OpenJPA debug log output
#log4j.logger.openjpa.Runtime=TRACE
#log4j.logger.openjpa.Enhance=TRACE
#log4j.logger.openjpa.SQL=TRACE
#log4j.logger.openjpa=TRACE
Thanks in advance!
Try disabling output to the console when logging to a file.
Remove stdout from the Log4j configuration in Config.groovy, changing this:
root { info 'file', 'stdout' }
to
root { info 'file' }
Hope it works.
