'atlas.graph.index.search.max-result-set-size' doesn't map to a List object: 150

I tried to set up the Sqoop hook with Atlas following these steps:
1- Set up the Atlas hook in sqoop-site.xml:
<property>
<name>sqoop.job.data.publish.class</name>
<value>org.apache.atlas.sqoop.hook.SqoopHook</value>
</property>
2- Copy the contents of the apache-atlas-sqoop-hook folder to hook/sqoop
3- Copy atlas-application.properties to the Sqoop conf directory
4- Copy all jars in /hook/sqoop to sqoop-dir/lib (a shell sketch of steps 2-4 follows below)
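As a minimal sketch, steps 2-4 could look like the following shell commands; the ATLAS_HOME and SQOOP_HOME locations here are assumptions for illustration, not the actual paths used:
# assumed install locations -- adjust to your environment
ATLAS_HOME=/opt/apache-atlas
SQOOP_HOME=/opt/sqoop

# 2- copy the contents of apache-atlas-sqoop-hook into hook/sqoop
cp -r apache-atlas-sqoop-hook/* "$ATLAS_HOME/hook/sqoop/"

# 3- make atlas-application.properties visible to Sqoop
cp "$ATLAS_HOME/conf/atlas-application.properties" "$SQOOP_HOME/conf/"

# 4- put the hook jars on Sqoop's classpath
cp "$ATLAS_HOME/hook/sqoop/"*.jar "$SQOOP_HOME/lib/"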
But when I tried to execute the sqoop import command:
sqoop import --connect jdbc:postgresql://server/db --username user -P --table tab --hive-import --create-hive-table
I got the following error:
22/08/01 16:44:51 INFO atlas.ApplicationProperties: Setting atlas.graph.index.search.max-result-set-size = 150
22/08/01 16:44:51 INFO atlas.ApplicationProperties: Setting atlas.graph.index.search.solr.wait-searcher = false
22/08/01 16:44:51 INFO atlas.ApplicationProperties: Property (set to default) atlas.graph.cache.db-cache = true
22/08/01 16:44:51 INFO atlas.ApplicationProperties: Property (set to default) atlas.graph.cache.db-cache-clean-wait = 20
22/08/01 16:44:51 INFO atlas.ApplicationProperties: Property (set to default) atlas.graph.cache.db-cache-size = 0.5
22/08/01 16:44:51 INFO atlas.ApplicationProperties: Property (set to default) atlas.graph.cache.tx-cache-size = 15000
22/08/01 16:44:51 INFO atlas.ApplicationProperties: Property (set to default) atlas.graph.cache.tx-dirty-size = 120
22/08/01 16:44:51 INFO hook.AtlasHook: Failed to load application properties
org.apache.atlas.AtlasException: Failed to load application properties
at org.apache.atlas.ApplicationProperties.get(ApplicationProperties.java:155)
at org.apache.atlas.ApplicationProperties.get(ApplicationProperties.java:108)
at org.apache.atlas.hook.AtlasHook.<clinit>(AtlasHook.java:82)
at org.apache.atlas.sqoop.hook.SqoopHook.<clinit>(SqoopHook.java:86)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.sqoop.mapreduce.PublishJobData.publishJobData(PublishJobData.java:46)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:284)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:692)
at org.apache.sqoop.manager.PostgresqlManager.importTable(PostgresqlManager.java:127)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:520)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:628)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Caused by: org.apache.commons.configuration.ConversionException: 'atlas.graph.index.search.max-result-set-size' doesn't map to a List object: 150, a java.lang.Integer
at org.apache.commons.configuration.AbstractConfiguration.getList(AbstractConfiguration.java:1144)
at org.apache.commons.configuration.AbstractConfiguration.getList(AbstractConfiguration.java:1109)
at org.apache.commons.configuration.AbstractConfiguration.interpolatedConfiguration(AbstractConfiguration.java:1274)
at org.apache.atlas.ApplicationProperties.get(ApplicationProperties.java:150)
... 17 more
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.apache.atlas.sqoop.hook.SqoopHook.<clinit>(SqoopHook.java:86)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.sqoop.mapreduce.PublishJobData.publishJobData(PublishJobData.java:46)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:284)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:692)
at org.apache.sqoop.manager.PostgresqlManager.importTable(PostgresqlManager.java:127)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:520)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:628)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Caused by: java.lang.NullPointerException
at org.apache.atlas.hook.AtlasHook.<clinit>(AtlasHook.java:87)
... 15 more
Here is the atlas-application.properties file :
######### Graph Database Configs #########
# Graph Database
#Configures the graph database to use. Defaults to JanusGraph
#atlas.graphdb.backend=org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase
# Graph Storage
# Set atlas.graph.storage.backend to the correct value for your desired storage
# backend. Possible values:
#
# hbase
# cassandra
# embeddedcassandra - Should only be set by building Atlas with -Pdist,embedded-cassandra-solr
# berkeleyje
#
# See the configuration documentation for more information about configuring the various storage backends.
#
atlas.graph.storage.backend=hbase2
atlas.graph.storage.hbase.table=apache_atlas_janus
# atlas.graph.storage.username=
# atlas.graph.storage.password=
#atlas.graph.cache.db-cache=false
atlas.graph.cache.db-cache = true
atlas.graph.cache.db-cache-clean-wait = 20
atlas.graph.cache.db-cache-size = 0.5
atlas.graph.cache.tx-cache-size = 15000
atlas.graph.cache.tx-dirty-size = 120
#Hbase
#For standalone mode , specify localhost
#for distributed mode, specify zookeeper quorum here
atlas.graph.storage.hostname=localhost
atlas.graph.storage.hbase.regions-per-server=1
# Gremlin Query Optimizer
#
# Enables rewriting gremlin queries to maximize performance. This flag is provided as
# a possible way to work around any defects that are found in the optimizer until they
# are resolved.
#atlas.query.gremlinOptimizerEnabled=true
# Delete handler
#
# This allows the default behavior of doing "soft" deletes to be changed.
#
# Allowed Values:
# org.apache.atlas.repository.store.graph.v1.SoftDeleteHandlerV1 - all deletes are "soft" deletes
# org.apache.atlas.repository.store.graph.v1.HardDeleteHandlerV1 - all deletes are "hard" deletes
#
#atlas.DeleteHandlerV1.impl=org.apache.atlas.repository.store.graph.v1.SoftDeleteHandlerV1
# Entity audit repository
#
# This allows the default behavior of logging entity changes to hbase to be changed.
#
# Allowed Values:
# org.apache.atlas.repository.audit.HBaseBasedAuditRepository - log entity changes to hbase
# org.apache.atlas.repository.audit.CassandraBasedAuditRepository - log entity changes to cassandra
# org.apache.atlas.repository.audit.NoopEntityAuditRepository - disable the audit repository
#
atlas.EntityAuditRepository.impl=org.apache.atlas.repository.audit.HBaseBasedAuditRepository
# if Cassandra is used as a backend for audit from the above property, uncomment and set the following
# properties appropriately. If using the embedded cassandra profile, these properties can remain
# commented out.
# atlas.EntityAuditRepository.keyspace=atlas_audit
# atlas.EntityAuditRepository.replicationFactor=1
# Graph Search Index
atlas.graph.index.search.backend=solr5
#Solr
#Solr cloud mode properties
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=localhost:2181
atlas.graph.index.search.solr.zookeeper-connect-timeout=60000
atlas.graph.index.search.solr.zookeeper-session-timeout=60000
atlas.graph.index.search.solr.wait-searcher=false
#Solr http mode properties
#atlas.graph.index.search.solr.mode=http
#atlas.graph.index.search.solr.http-urls=http://localhost:8983/solr
# Solr-specific configuration property
atlas.graph.index.search.max-result-set-size=150
######### Import Configs #########
#atlas.import.temp.directory=/temp/import
######### Notification Configs #########
atlas.notification.embedded=false
atlas.kafka.data=${sys:atlas.home}/data/kafka
atlas.kafka.zookeeper.connect=localhost:9026
atlas.kafka.bootstrap.servers=localhost:9027
atlas.kafka.zookeeper.session.timeout.ms=400
atlas.kafka.zookeeper.connection.timeout.ms=200
atlas.kafka.zookeeper.sync.time.ms=20
atlas.kafka.auto.commit.interval.ms=1000
atlas.kafka.hook.group.id=atlas
atlas.kafka.enable.auto.commit=false
atlas.kafka.auto.offset.reset=earliest
atlas.kafka.session.timeout.ms=30000
atlas.kafka.offsets.topic.replication.factor=1
atlas.kafka.poll.timeout.ms=1000
atlas.notification.create.topics=true
atlas.notification.replicas=1
atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES
atlas.notification.log.failed.messages=true
atlas.notification.consumer.retry.interval=500
atlas.notification.hook.retry.interval=1000
# Enable for Kerberized Kafka clusters
#atlas.notification.kafka.service.principal=kafka/_HOST@EXAMPLE.COM
#atlas.notification.kafka.keytab.location=/etc/security/keytabs/kafka.service.keytab
## Server port configuration
#atlas.server.http.port=21000
#atlas.server.https.port=21443
######### Security Properties #########
# SSL config
atlas.enableTLS=false
#truststore.file=/path/to/truststore.jks
#cert.stores.credential.provider.path=jceks://file/path/to/credentialstore.jceks
#following only required for 2-way SSL
#keystore.file=/path/to/keystore.jks
# Authentication config
atlas.authentication.method.kerberos=false
atlas.authentication.method.file=true
#### ldap.type= LDAP or AD
atlas.authentication.method.ldap.type=none
#### user credentials file
atlas.authentication.method.file.filename=${sys:atlas.home}/conf/users-credentials.properties
### groups from UGI
#atlas.authentication.method.ldap.ugi-groups=true
######## LDAP properties #########
#atlas.authentication.method.ldap.url=ldap://<ldap server url>:389
#atlas.authentication.method.ldap.userDNpattern=uid={0},ou=People,dc=example,dc=com
#atlas.authentication.method.ldap.groupSearchBase=dc=example,dc=com
#atlas.authentication.method.ldap.groupSearchFilter=(member=uid={0},ou=Users,dc=example,dc=com)
#atlas.authentication.method.ldap.groupRoleAttribute=cn
#atlas.authentication.method.ldap.base.dn=dc=example,dc=com
#atlas.authentication.method.ldap.bind.dn=cn=Manager,dc=example,dc=com
#atlas.authentication.method.ldap.bind.password=<password>
#atlas.authentication.method.ldap.referral=ignore
#atlas.authentication.method.ldap.user.searchfilter=(uid={0})
#atlas.authentication.method.ldap.default.role=<default role>
######### Active directory properties #######
#atlas.authentication.method.ldap.ad.domain=example.com
#atlas.authentication.method.ldap.ad.url=ldap://<AD server url>:389
#atlas.authentication.method.ldap.ad.base.dn=(sAMAccountName={0})
#atlas.authentication.method.ldap.ad.bind.dn=CN=team,CN=Users,DC=example,DC=com
#atlas.authentication.method.ldap.ad.bind.password=<password>
#atlas.authentication.method.ldap.ad.referral=ignore
#atlas.authentication.method.ldap.ad.user.searchfilter=(sAMAccountName={0})
#atlas.authentication.method.ldap.ad.default.role=<default role>
######### JAAS Configuration ########
#atlas.jaas.KafkaClient.loginModuleName = com.sun.security.auth.module.Krb5LoginModule
#atlas.jaas.KafkaClient.loginModuleControlFlag = required
#atlas.jaas.KafkaClient.option.useKeyTab = true
#atlas.jaas.KafkaClient.option.storeKey = true
#atlas.jaas.KafkaClient.option.serviceName = kafka
#atlas.jaas.KafkaClient.option.keyTab = /etc/security/keytabs/atlas.service.keytab
#atlas.jaas.KafkaClient.option.principal = atlas/_HOST@EXAMPLE.COM
######### Server Properties #########
atlas.rest.address=http://localhost:21000
# If enabled and set to true, this will run setup steps when the server starts
#atlas.server.run.setup.on.start=false
######### Entity Audit Configs #########
atlas.audit.hbase.tablename=apache_atlas_entity_audit
atlas.audit.zookeeper.session.timeout.ms=1000
atlas.audit.hbase.zookeeper.quorum=localhost:2181
######### High Availability Configuration ########
atlas.server.ha.enabled=false
#### Enabled the configs below as per need if HA is enabled #####
#atlas.server.ids=id1
#atlas.server.address.id1=localhost:21000
#atlas.server.ha.zookeeper.connect=localhost:2181
#atlas.server.ha.zookeeper.retry.sleeptime.ms=1000
#atlas.server.ha.zookeeper.num.retries=3
#atlas.server.ha.zookeeper.session.timeout.ms=20000
## if ACLs need to be set on the created nodes, uncomment these lines and set the values ##
#atlas.server.ha.zookeeper.acl=<scheme>:<id>
#atlas.server.ha.zookeeper.auth=<scheme>:<authinfo>
######### Atlas Authorization #########
atlas.authorizer.impl=simple
atlas.authorizer.simple.authz.policy.file=atlas-simple-authz-policy.json
######### Type Cache Implementation ########
# A type cache class which implements
# org.apache.atlas.typesystem.types.cache.TypeCache.
# The default implementation is org.apache.atlas.typesystem.types.cache.DefaultTypeCache which is a local in-memory type cache.
#atlas.TypeCache.impl=
######### Performance Configs #########
#atlas.graph.storage.lock.retries=10
#atlas.graph.storage.cache.db-cache-time=120000
######### CSRF Configs #########
atlas.rest-csrf.enabled=true
atlas.rest-csrf.browser-useragents-regex=^Mozilla.*,^Opera.*,^Chrome.*
atlas.rest-csrf.methods-to-ignore=GET,OPTIONS,HEAD,TRACE
atlas.rest-csrf.custom-header=X-XSRF-HEADER
############ KNOX Configs ################
#atlas.sso.knox.browser.useragent=Mozilla,Chrome,Opera
#atlas.sso.knox.enabled=true
#atlas.sso.knox.providerurl=https://<knox gateway ip>:8443/gateway/knoxsso/api/v1/websso
#atlas.sso.knox.publicKey=
############ Atlas Metric/Stats configs ################
# Format: atlas.metric.query.<key>.<name>
atlas.metric.query.cache.ttlInSecs=900
#atlas.metric.query.general.typeCount=
#atlas.metric.query.general.typeUnusedCount=
#atlas.metric.query.general.entityCount=
#atlas.metric.query.general.tagCount=
#atlas.metric.query.general.entityDeleted=
#
#atlas.metric.query.entity.typeEntities=
#atlas.metric.query.entity.entityTagged=
#
#atlas.metric.query.tags.entityTags=
######### Compiled Query Cache Configuration #########
# The size of the compiled query cache. Older queries will be evicted from the cache
# when we reach the capacity.
#atlas.CompiledQueryCache.capacity=1000
# Allows notifications when items are evicted from the compiled query
# cache because it has become full. A warning will be issued when
# the specified number of evictions have occurred. If the eviction
# warning threshold <= 0, no eviction warnings will be issued.
#atlas.CompiledQueryCache.evictionWarningThrottle=0
######### Full Text Search Configuration #########
#Set to false to disable full text search.
#atlas.search.fulltext.enable=true
######### Gremlin Search Configuration #########
#Set to false to disable gremlin search.
atlas.search.gremlin.enable=false
########## Add http headers ###########
#atlas.headers.Access-Control-Allow-Origin=*
#atlas.headers.Access-Control-Allow-Methods=GET,OPTIONS,HEAD,PUT,POST
#atlas.headers.<headerName>=<headerValue>
######### UI Configuration ########
atlas.ui.default.version=v1
atlas.hook.sqoop.synchronous=false
atlas.hook.sqoop.numRetries=3
atlas.hook.sqoop.queueSize=10000
Any solutions, please?

Related

Deleted logs are not rewritten to Elasticsearch

I'm using Logstash to read log files and send them to Elasticsearch. It works fine in streaming mode, creating a different index every day and writing logs in real time.
The problem is, yesterday at 3pm I accidentally deleted the index. It was restored automatically and continued writing logs. However, I have lost the logs from 12am to 3pm.
In order to rewrite the logs from the beginning, I deleted the sincedb file and also added ignore_older => 0 to the Logstash configuration. After that, I deleted the index again. But it keeps streaming, ignoring old data.
My current Logstash configuration:
input {
  file {
    path => ["/someDirectory/Logs/20221220-00001.log"]
    start_position => "beginning"
    tags => ["prod"]
    ignore_older => 0
    sincedb_path => "/dev/null"
    type => "cowrie"
  }
}
filter {
  grok {
    match => ["path", "/var/www/cap/cap-server/Logs/%{GREEDYDATA:index_name}" ]
  }
}
output {
  elasticsearch {
    hosts => "IP:9200"
    user => "elastic"
    password => "xxxxxxxx"
    index => "logstash-log-%{index_name}"
  }
}
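From what I understand of the file input plugin, ignore_older is a maximum file age in seconds (files last modified longer ago than the value are skipped), so ignore_older => 0 may be excluding every file rather than disabling the check. Here is a variant of the input I would try (a sketch, not a verified fix), dropping ignore_older and keeping sincedb_path => "/dev/null":
input {
  file {
    path => ["/someDirectory/Logs/20221220-00001.log"]
    start_position => "beginning"  # read files from the top when first discovered
    sincedb_path => "/dev/null"    # never persist read positions, so files are re-read on restart
    tags => ["prod"]
    type => "cowrie"
    # ignore_older removed: a value of 0 seconds would skip any file older than "now"
  }
}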
I would appreciate any help.
I'm also attaching Elasticsearch configuration:
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
discovery.type: single-node
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
#action.destructive_requires_name: true
Note that after all configuration changes, Logstash and Elasticsearch have been restarted.

NO alert received on elastalert-test-rule or while executing the rule

I have done the setup on Windows 10. I am getting the output below when executing elastalert-test-rule for my rule.
elastalert-test-rule example_rules\example_frequency.yaml --config config.yaml
Would have written the following documents to writeback index (default is elastalert_status):
elastalert_status - {'rule_name': 'Example frequency rule', 'endtime': datetime.datetime(2020, 4, 19, 18, 49, 10, 397745, tzinfo=tzutc()), 'starttime': datetime.datetime(2019, 4, 17, 3, 13, 10, 397745, tzinfo=tzutc()), 'matches': 4, 'hits': 4, '#timestamp': datetime.datetime(2020, 4, 19, 18, 55, 56, 314841, tzinfo=tzutc()), 'time_taken': 405.48910188674927}
However, no alert is triggered.
Please find below the contents of config.yaml and example_frequency.yaml.
config.yaml
# This is the folder that contains the rule yaml files
# Any .yaml file will be loaded as a rule
rules_folder: example_rules
# How often ElastAlert will query Elasticsearch
# The unit can be anything from weeks to seconds
run_every:
  seconds: 5
# ElastAlert will buffer results from the most recent
# period of time, in case some log sources are not in real time
buffer_time:
  minutes: 15
# The Elasticsearch hostname for metadata writeback
# Note that every rule can have its own Elasticsearch host
es_host: 127.0.0.1
# The Elasticsearch port
es_port: 9200
# The AWS region to use. Set this when using AWS-managed elasticsearch
#aws_region: us-east-1
# The AWS profile to use. Use this if you are using an aws-cli profile.
# See http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
# for details
#profile: test
# Optional URL prefix for Elasticsearch
#es_url_prefix: elasticsearch
# Connect with TLS to Elasticsearch
#use_ssl: True
# Verify TLS certificates
#verify_certs: True
# GET request with body is the default option for Elasticsearch.
# If it fails for some reason, you can pass 'GET', 'POST' or 'source'.
# See http://elasticsearch-py.readthedocs.io/en/master/connection.html?highlight=send_get_body_as#transport
# for details
#es_send_get_body_as: GET
# Optional basic-auth username and password for Elasticsearch
#es_username: someusername
#es_password: somepassword
# Use SSL authentication with client certificates client_cert must be
# a pem file containing both cert and key for client
#verify_certs: True
#ca_certs: /path/to/cacert.pem
#client_cert: /path/to/client_cert.pem
#client_key: /path/to/client_key.key
# The index on es_host which is used for metadata storage
# This can be an unmapped index, but it is recommended that you run
# elastalert-create-index to set a mapping
writeback_index: elastalert_status
writeback_alias: elastalert_alerts
# If an alert fails for some reason, ElastAlert will retry
# sending the alert until this time period has elapsed
alert_time_limit:
  days: 2
# Custom logging configuration
# If you want to setup your own logging configuration to log into
# files as well or to Logstash and/or modify log levels, use
# the configuration below and adjust to your needs.
# Note: if you run ElastAlert with --verbose/--debug, the log level of
# the "elastalert" logger is changed to INFO, if not already INFO/DEBUG.
#logging:
#  version: 1
#  incremental: false
#  disable_existing_loggers: false
#  formatters:
#    logline:
#      format: '%(asctime)s %(levelname)+8s %(name)+20s %(message)s'
#
#  handlers:
#    console:
#      class: logging.StreamHandler
#      formatter: logline
#      level: DEBUG
#      stream: ext://sys.stderr
#
#    file:
#      class : logging.FileHandler
#      formatter: logline
#      level: DEBUG
#      filename: elastalert.log
#
#  loggers:
#    elastalert:
#      level: WARN
#      handlers: []
#      propagate: true
#
#    elasticsearch:
#      level: WARN
#      handlers: []
#      propagate: true
#
#    elasticsearch.trace:
#      level: WARN
#      handlers: []
#      propagate: true
#
#    '':  # root logger
#      level: WARN
#      handlers:
#        - console
#        - file
#      propagate: false
example_frequency.yaml
# Alert when the rate of events exceeds a threshold
# (Optional)
# Elasticsearch host
# es_host: elasticsearch.example.com
# (Optional)
# Elasticsearch port
# es_port: 14900
# (Optional) Connect with SSL to Elasticsearch
#use_ssl: True
# (Optional) basic-auth username and password for Elasticsearch
#es_username: someusername
#es_password: somepassword
# (Required)
# Rule name, must be unique
name: Example frequency rule
# (Required)
# Type of alert.
# the frequency rule type alerts when num_events events occur with timeframe time
type: frequency
# (Required)
# Index to search, wildcard supported
index: com-*
# (Required, frequency specific)
# Alert when this many documents matching the query occur within a timeframe
num_events: 1
# (Required, frequency specific)
# num_events must occur within this amount of time to trigger an alert
timeframe:
  days: 365
# (Required)
# A list of Elasticsearch filters used to find events
# These filters are joined with AND and nested in a filtered query
# For more info: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl.html
filter:
- term:
    "log_json.response.statusCode": "404"
# (Required)
# The alert to use when a match is found
alert:
- "email"
# (required, email specific)
# a list of email addresses to send alerts to
email:
- "username@mydomain.com"
realert:
  minutes: 0
What is it that I am missing to receive alerts? I don't see any error on the console either.
The SMTP configuration is missing, which is why no alert is being sent.
Please try to include smtp_host, smtp_port, smtp_ssl and smtp_auth_file in your example_frequency.yaml.
Refer to the documentation for the Email alert; a sketch is shown below.
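As a minimal sketch, the added settings could look like this; the host, port, and file path are placeholders, not verified values:
# appended to example_frequency.yaml
smtp_host: smtp.mydomain.com                  # placeholder SMTP server
smtp_port: 465                                # placeholder port
smtp_ssl: true
smtp_auth_file: /path/to/smtp_auth_file.yaml  # file holding the credentials
from_addr: elastalert@mydomain.com            # optional sender address
The smtp_auth_file itself is a small YAML file with user and password fields:
user: someusername
password: somepassword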

Not able to access Kibana GUI with http://Ip:5601/

I have installed Elasticsearch 2.1.0 and Kibana 4.3.0 on a single machine.
kibana.yml configuration:
# Kibana is served by a back end server. This controls which port to use.
server.port: 5601
# The host to bind the server to.
server.host: "IP"
# A value to use as a XSRF token. This token is sent back to the server on each request
# and required if you want to execute requests from other clients (like curl).
# server.xsrf.token: ""
# If you are running kibana behind a proxy, and want to mount it at a path,
# specify that path here. The basePath can't end in a slash.
# server.basePath: ""
# The Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://IP:9200/"
# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana.index: ".kibana"
# The default application to load.
kibana.defaultAppId: "discover"
# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# elasticsearch.username: "user"
# elasticsearch.password: "pass"
# SSL for outgoing requests from the Kibana Server to the browser (PEM formatted)
# server.ssl.cert: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key
# Optional setting to validate that your Elasticsearch backend uses the same key files (PEM formatted)
# elasticsearch.ssl.cert: /path/to/your/client.crt
# elasticsearch.ssl.key: /path/to/your/client.key
# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# elasticsearch.ssl.ca: /path/to/your/CA.pem
# Set to false to have a complete disregard for the validity of the SSL
# certificate.
elasticsearch.ssl.verify: true
# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
# elasticsearch.requestTimeout: 300000
# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
# elasticsearch.shardTimeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# elasticsearch.startupTimeout: 5000
# Set the path to where you would like the process id file to be created.
pid.file: /var/run/kibana.pid
# If you would like to send the log output to a file you can set the path below.
logging.dest: /var/log/kibana/kibana.log
# Set this to true to suppress all logging output.
# logging.silent: false
# Set this to true to suppress all logging output except for error messages.
# logging.quiet: true
# Set this to true to log all events, including system usage information and all requests.
# logging.verbose: true
While I am doing curl -IP:5601, I am getting this output:
HTTP/1.1 200 OK
x-app-name: kibana
x-app-version: 4.3.0
cache-control: no-cache
content-type: text/html
content-length: 217
accept-ranges: bytes
Date: Wed, 20 Jan 2016 15:28:35 GMT
Connection: keep-alive
<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';
var hash = window.location.hash;
if (hash.length) {
  window.location = hashRoute + hash;
} else {
  window.location = defaultRoute;
}
</script>
Elasticsearch and Kibana are both up and running, but I am still not able to access the Kibana GUI from the browser; it does not display the page.
I checked the configuration in elasticsearch.yml too; the host and IP are correct there. The curl command gives this output for Elasticsearch [Command: curl http://IP:9200/]:
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.1.0",
    "build_hash" : "72cd1f1a3eee09505e036106146dc1949dc5dc87",
    "build_timestamp" : "2015-11-18T22:40:03Z",
    "build_snapshot" : false,
    "lucene_version" : "5.3.1"
  },
  "tagline" : "You Know, for Search"
}
Could anybody tell me what the issue could be?
Did you install Elasticsearch and Kibana on your local machine, I mean the laptop or computer that you are working on? Or is it running on a separate server?
If you are running it on the same machine that you are accessing the browser from, then you could just access it as localhost:port.
As your error includes the status "Elasticsearch is still initializing the kibana index", I would recommend you try the steps mentioned on this page:
Elasticsearch is still initializing the kibana index
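As a quick way to check that status (a sketch; IP is a placeholder for your Elasticsearch host), you can ask Elasticsearch for its cluster health and for the state of the .kibana index:
# cluster health: "red" status usually means indices such as .kibana cannot initialize
curl "http://IP:9200/_cluster/health?pretty"

# list the .kibana index together with its health column (works on Elasticsearch 2.x)
curl "http://IP:9200/_cat/indices/.kibana?v"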

(Openstack) Unable to upload the image to the Image Service

I'm new to OpenStack and trying to build my own OpenStack environment.
After following the "OpenStack Installation Guide for Red Hat Enterprise Linux 7, CentOS 7, and Fedora 20" (on Fedora 21), I faced a problem uploading cirrOS to the Image Service.
My OpenStack version, according to this command: "[root@localhost ~]# keystone-manage --version", should be
2014.2.2
When I try to upload the image I get this output:
ADMIN-OPENRC.SH:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=MYPASS
export OS_AUTH_URL=http://controller:35357/v2.0
[root@localhost ~]# source admin-openrc.sh
[root@localhost ~]# glance --debug image-create --name "cirros-0.3.3-x86_64" --file /tmp/images/cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --is-public True --progress
curl -i -X POST -H 'Accept-Encoding: gzip, deflate' -H 'x-image-meta-container_format: bare' -H 'Accept: */*' -H 'X-Auth-Token: {SHA1}726116102202fa50ff0c064ca3cadb86b65fe997' -H 'x-image-meta-size: 13200896' -H 'Connection: keep-alive' -H 'x-image-meta-is_public: True' -H 'User-Agent: python-glanceclient' -H 'Content-Type: application/octet-stream' -H 'x-image-meta-disk_format: qcow2' -H 'x-image-meta-name: cirros-0.3.3-x86_64' http://controller:9292/v1/images
[=============================>] 100%
Request returned failure status 401. Invalid OpenStack Identity credentials.
I have to mention that I can get a token from Keystone without problems:
[root@localhost ~]# keystone token-get
+-----------+----------------------------------+
| Property  | Value                            |
+-----------+----------------------------------+
| expires   | 2015-07-03T10:26:38Z             |
| id        | 96299e7c355d43a9b8e5b7f47a4d4cdd |
| tenant_id | 425de1784b644473b6f1cffe874992c5 |
| user_id   | 0a85326e1c744d449327894b6a276b5d |
+-----------+----------------------------------+
Here are my config files:
GLANCE-API.CONF & GLANCE-REGISTRY.CONF
connection=mysql://glance:MYPASS@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = MYPASS
KEYSTONE.CONF
connection=mysql://keystone:MYPASS@controller/keystone
Here is my api.log:
/var/log/glance/api.log
2015-07-03 11:15:00.763 3447 WARNING keystonemiddleware.auth_token [-] Retrying on HTTP connection exception: Unable to establish connection to http://controller:35357/
2015-07-03 11:15:01.266 3447 WARNING keystonemiddleware.auth_token [-] Retrying on HTTP connection exception: Unable to establish connection to http://controller:35357/
2015-07-03 11:15:02.269 3447 WARNING keystonemiddleware.auth_token [-] Retrying on HTTP connection exception: Unable to establish connection to http://controller:35357/
2015-07-03 11:15:04.273 3447 ERROR keystonemiddleware.auth_token [-] HTTP connection exception: Unable to establish connection to http://controller:35357/
2015-07-03 11:15:04.274 3447 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2015-07-03 11:15:04.274 3447 INFO keystonemiddleware.auth_token [-] Invalid user token - deferring reject downstream
2015-07-03 11:15:04.327 3447 INFO glance.wsgi.server [-] 192.168.13.92 - - [03/Jul/2015 11:15:04] "POST /v1/images HTTP/1.1" 401 571 3.579172
2015-07-03 11:30:29.083 3446 WARNING keystonemiddleware.auth_token [-] Retrying on HTTP connection exception: Unable to establish connection to http://controller:35357/
2015-07-03 11:30:29.587 3446 WARNING keystonemiddleware.auth_token [-] Retrying on HTTP connection exception: Unable to establish connection to http://controller:35357/
2015-07-03 11:30:30.591 3446 WARNING keystonemiddleware.auth_token [-] Retrying on HTTP connection exception: Unable to establish connection to http://controller:35357/
2015-07-03 11:30:32.595 3446 ERROR keystonemiddleware.auth_token [-] HTTP connection exception: Unable to establish connection to http://controller:35357/
2015-07-03 11:30:32.595 3446 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2015-07-03 11:30:32.595 3446 INFO keystonemiddleware.auth_token [-] Invalid user token - deferring reject downstream
2015-07-03 11:30:32.649 3446 INFO glance.wsgi.server [-] 192.168.13.92 - - [03/Jul/2015 11:30:32] "POST /v1/images HTTP/1.1" 401 571 3.581761
Thanks for your effort
Kevin
--------------------------EDIT-----------------------------
Full Glance-Registry.conf:
[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose=True
# Show debugging output in logs (sets DEBUG log level output)
#debug=False
# Address to bind the registry server
#bind_host=0.0.0.0
# Port to bind the registry server to
#bind_port=9191
# Log to this file. Make sure you do not set the same log file for both the API
# and registry servers!
#
# If `log_file` is omitted and `use_syslog` is false, then log messages are
# sent to stdout as a fallback.
#log_file=/var/log/glance/registry.log
# Backlog requests when creating socket
#backlog=4096
# TCP_KEEPIDLE value in seconds when creating socket.
# Not supported on OS X.
#tcp_keepidle=600
# API to use for accessing data. Default value points to sqlalchemy
# package.
#data_api=glance.db.sqlalchemy.api
# The number of child process workers that will be
# created to service Registry requests. The default will be
# equal to the number of CPUs available. (integer value)
#workers=None
# Enable Registry API versions individually or simultaneously
#enable_v1_registry=True
#enable_v2_registry=True
# Limit the api to return `param_limit_max` items in a call to a container. If
# a larger `limit` query param is provided, it will be reduced to this value.
#api_limit_max=1000
# If a `limit` query param is not provided in an api request, it will
# default to `limit_param_default`
#limit_param_default=25
# Role used to identify an authenticated user as administrator
#admin_role=admin
# Whether to automatically create the database tables.
# Default: False
#db_auto_create=False
# Enable DEBUG log messages from sqlalchemy which prints every database
# query and response.
# Default: False
#sqlalchemy_debug=True
# ================= Syslog Options ============================
# Send logs to syslog (/dev/log) instead of to file specified
# by `log_file`
#use_syslog=False
# Facility to use. If unset defaults to LOG_USER.
#syslog_log_facility=LOG_LOCAL1
# ================= SSL Options ===============================
# Certificate file to use when starting registry server securely
#cert_file=/path/to/certfile
# Private key file to use when starting registry server securely
#key_file=/path/to/keyfile
# CA certificate file to use to verify connecting clients
#ca_file=/path/to/cafile
# ============ Notification System Options =====================
# Driver or drivers to handle sending notifications. Set to
# 'messaging' to send notifications to a message queue.
notification_driver = noop
# Default publisher_id for outgoing notifications.
# default_publisher_id = image.localhost
# Messaging driver used for 'messaging' notifications driver
# rpc_backend = 'rabbit'
# Configuration options if sending notifications via rabbitmq (these are
# the defaults)
#rabbit_host=localhost
#rabbit_port=5672
#rabbit_use_ssl=false
#rabbit_userid=guest
#rabbit_password=guest
#rabbit_virtual_host=/
#rabbit_notification_exchange=glance
#rabbit_notification_topic=notifications
#rabbit_durable_queues=False
# Configuration options if sending notifications via Qpid (these are
# the defaults)
#qpid_notification_exchange=glance
#qpid_notification_topic=notifications
#qpid_hostname=localhost
#qpid_port=5672
#qpid_username=
#qpid_password=
#qpid_sasl_mechanisms=
#qpid_reconnect_timeout=0
#qpid_reconnect_limit=0
#qpid_reconnect_interval_min=0
#qpid_reconnect_interval_max=0
#qpid_reconnect_interval=0
#qpid_heartbeat=5
# Set to 'ssl' to enable SSL
#qpid_protocol=tcp
#qpid_tcp_nodelay=True
# ================= Database Options ==========================
[database]
# The file name to use with SQLite (string value)
#sqlite_db=glance.sqlite
# If True, SQLite uses synchronous mode (boolean value)
#sqlite_synchronous=True
# The backend to use for db (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend=sqlalchemy
# The SQLAlchemy connection string used to connect to the
# database (string value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
connection = mysql://glance:MYPASS@controller/glance
# The SQL mode to be used for MySQL sessions. This option,
# including the default, overrides any server-set SQL mode. To
# use whatever SQL mode is set by the server configuration,
# set this to no value. Example: mysql_sql_mode= (string
# value)
#mysql_sql_mode=TRADITIONAL
# Timeout before idle sql connections are reaped (integer
# value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout=3600
# Minimum number of SQL connections to keep open in a pool
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size=1
# Maximum number of SQL connections to keep open in a pool
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size=<None>
# Maximum db connection retries during startup. (setting -1
# implies an infinite retry count) (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries=10
# Interval between retries of opening a sql connection
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval=10
# If set, use this value for max_overflow with sqlalchemy
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow=<None>
# Verbosity of SQL debugging information. 0=None,
# 100=Everything (integer value)
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug=0
# Add python stack traces to SQL as comment strings (boolean
# value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace=False
# If set, use this value for pool_timeout with sqlalchemy
# (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout=<None>
# Enable the experimental use of database reconnect on
# connection lost (boolean value)
#use_db_reconnect=False
# seconds between db connection retries (integer value)
#db_retry_interval=1
# Whether to increase interval between db connection retries,
# up to db_max_retry_interval (boolean value)
#db_inc_retry_interval=True
# max seconds between db connection retries, if
# db_inc_retry_interval is enabled (integer value)
#db_max_retry_interval=10
# maximum db connection retries before error is raised.
# (setting -1 implies an infinite retry count) (integer value)
#db_max_retries=20
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = MYPASS
[paste_deploy]
# Name of the paste configuration file that defines the available pipelines
#config_file=/usr/share/glance/glance-registry-dist-paste.ini
# Partial name of a pipeline in your paste configuration file with the
# service name removed. For example, if your paste section name is
# [pipeline:glance-registry-keystone], you would configure the flavor below
# as 'keystone'.
flavor=keystone
[profiler]
# If False fully disable profiling feature.
#enabled=False
# If False doesn't trace SQL requests.
#trace_sqlalchemy=False
Glance-Api.conf:
[paste_deploy]
# Name of the paste configuration file that defines the available pipelines
#config_file=/usr/share/glance/glance-api-dist-paste.ini
# Partial name of a pipeline in your paste configuration file with the
# service name removed. For example, if your paste section name is
# [pipeline:glance-api-keystone], you would configure the flavor below
# as 'keystone'.
flavor=keystone
Kevin,
All your configs look fine. Here is what I would suggest you do (a quick check script is sketched after this list):
1) Run glance image-list and see if you get anything
2) Did you assign the admin role correctly to the glance user: "keystone user-role-add --user glance --tenant service --role admin"?
3) Did you run source admin-openrc.sh before running glance image-create?
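As an unverified sketch of those three checks, plus the Keystone connectivity failure that the api.log above shows (hostnames taken from the question):
# source the credentials first (point 3)
source admin-openrc.sh

# point 1: can we list images at all?
glance image-list

# point 2: does the glance user have the admin role on the service tenant?
keystone user-role-list --user glance --tenant service

# the api.log shows glance-api cannot reach Keystone on port 35357;
# verify the admin endpoint is actually listening
curl -i http://controller:35357/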
HTH
Regards
Ashish

Unable to show Global Permissions and new LDAP users not able to login

We have upgraded SonarQube from 4.0 to 4.5.4. When a new LDAP user tries to log in, or an administrator tries to show Global Permissions in the settings menu, a JDBCError occurs.
Here is the error that occurs when I want to show Global Permissions:
2015.04.23 17:35:49 INFO http-bio-0.0.0.0-9100-exec-5 web[sql] 1ms Executed SQL: SELECT role FROM "group_roles" WHERE (resource_id is null and (group_id is null or group_id in(2,1,
3)))
2015.04.23 17:35:49 INFO http-bio-0.0.0.0-9100-exec-5 web[sql] 0ms Executed SQL: SELECT * FROM "user_roles" WHERE ("user_roles".user_id = 2)
2015.04.23 17:35:49 INFO http-bio-0.0.0.0-9100-exec-5 web[sql] 1ms Executed SQL: SELECT "user_roles"."id" AS t0_r0, "user_roles"."user_id" AS t0_r1, "user_roles"."resource_id" AS t
0_r2, "user_roles"."role" AS t0_r3, "users"."id" AS t1_r0, "users"."name" AS t1_r1, "users"."password" AS t1_r2, "users"."email" AS t1_r3, "users"."created" AS t1_r4, "users"."fullna
me" AS t1_r5, "users"."creationdate" AS t1_r6, "users"."disabled" AS t1_r7, "users"."lastactivation" AS t1_r8, "users"."link" AS t1_r9, "users"."accountreactivation" AS t1_r10, "user
s"."category" AS t1_r11, "users"."email" AS t1_r12, "users"."firstname" AS t1_r13, "users"."lastname" AS t1_r14, "users"."login" AS t1_r15, "users"."password" AS t1_r16, "users"."pho
neno" AS t1_r17, "users"."warehouseholderno" AS t1_r18, "users"."login" AS t1_r19, "users"."name" AS t1_r20, "users"."email" AS t1_r21, "users"."crypted_password" AS t1_r22, "users".
"salt" AS t1_r23, "users"."created_at" AS t1_r24, "users"."updated_at" AS t1_r25, "users"."remember_token" AS t1_r26, "users"."remember_token_expires_at" AS t1_r27, "users"."active"
AS t1_r28 FROM "user_roles" LEFT OUTER JOIN "users" ON "users".id = "user_roles".user_id WHERE ("user_roles"."role" = 'profileadmin' AND "user_roles"."resource_id" IS NULL AND "users
"."active" = 't')
2015.04.23 17:35:49 INFO http-bio-0.0.0.0-9100-exec-5 web[sql] 1ms Executed SQL: select 1
2015.04.23 17:35:49 ERROR http-bio-0.0.0.0-9100-exec-5 web[o.s.s.ui.JRubyFacade] Fail to render: http://build:9100/roles/global
ActiveRecord::JDBCError: ERROR: column users.password does not exist
Position: 184: SELECT "user_roles"."id" AS t0_r0, "user_roles"."user_id" AS t0_r1, "user_roles"."resource_id" AS t0_r2, "user_roles"."role" AS t0_r3, "users"."id" AS t1_r0, "users"
."name" AS t1_r1, "users"."password" AS t1_r2, "users"."email" AS t1_r3, "users"."created" AS t1_r4, "users"."fullname" AS t1_r5, "users"."creationdate" AS t1_r6, "users"."disabled"
AS t1_r7, "users"."lastactivation" AS t1_r8, "users"."link" AS t1_r9, "users"."accountreactivation" AS t1_r10, "users"."category" AS t1_r11, "users"."email" AS t1_r12, "users"."first
name" AS t1_r13, "users"."lastname" AS t1_r14, "users"."login" AS t1_r15, "users"."password" AS t1_r16, "users"."phoneno" AS t1_r17, "users"."warehouseholderno" AS t1_r18, "users"."l
ogin" AS t1_r19, "users"."name" AS t1_r20, "users"."email" AS t1_r21, "users"."crypted_password" AS t1_r22, "users"."salt" AS t1_r23, "users"."created_at" AS t1_r24, "users"."updated
_at" AS t1_r25, "users"."remember_token" AS t1_r26, "users"."remember_token_expires_at" AS t1_r27, "users"."active" AS t1_r28 FROM "user_roles" LEFT OUTER JOIN "users" ON "users".id
= "user_roles".user_id WHERE ("user_roles"."role" = 'profileadmin' AND "user_roles"."resource_id" IS NULL AND "users"."active" = 't')
On line #37 of app/views/roles/global.html.erb
34: <%= message("global_permissions.#{permission_key}.desc") -%>
35:
36:
37: <span id="users-<%= permission_key.parameterize -%>"><%= users(permission_key).map(&:name).join(', ') -%>
38: (<%= link_to_edit_roles_permission_form(message('select'), permission_key, nil, "select-users-#{permission_key}") -%>)<br/>
39:
40:
gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract_adapter.rb:227:in `log'
gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract_adapter.rb:212:in `log'
gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/jdbc/adapter.rb:183:in `execute'
gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/jdbc/adapter.rb:275:in `select'
gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/jdbc/adapter.rb:202:in `jdbc_select_all'
gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/query_cache.rb:60:in `select_all_with_query_cache'
gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/query_cache.rb:81:in `cache_sql'
gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/query_cache.rb:60:in `select_all_with_query_cache'
gems/gems/activerecord-2.3.15/lib/active_record/associations.rb:1624:in `select_all_rows'
gems/gems/activerecord-2.3.15/lib/active_record/associations.rb:1401:in `find_with_associations'
org/jruby/RubyKernel.java:1268:in `catch'
gems/gems/activerecord-2.3.15/lib/active_record/associations.rb:1399:in `find_with_associations'
gems/gems/activerecord-2.3.15/lib/active_record/base.rb:1580:in `find_every'
gems/gems/activerecord-2.3.15/lib/active_record/base.rb:619:in `find'
gems/gems/activerecord-2.3.15/lib/active_record/base.rb:639:in `all'
app/helpers/roles_helper.rb:24:in `users'
app/views/roles/global.html.erb:37
org/jruby/RubyArray.java:1613:in `each'
app/views/roles/global.html.erb:27
org/jruby/RubyKernel.java:2227:in `send'
gems/gems/actionpack-2.3.15/lib/action_view/renderable.rb:34:in `render'
gems/gems/actionpack-2.3.15/lib/action_view/base.rb:306:in `with_template'
gems/gems/actionpack-2.3.15/lib/action_view/renderable.rb:30:in `render'
And here is the error that occurs during login:
2015.04.24 10:35:28 INFO http-bio-0.0.0.0-9100-exec-22 web[sql] 3ms Executed SQL: SELECT * FROM "users" WHERE ("users"."login" = 'a428774') LIMIT 1
2015.04.24 10:35:28 INFO http-bio-0.0.0.0-9100-exec-22 web[sql] 6ms Executed SQL: SELECT * FROM "groups" WHERE ("groups"."name" = 'sonar-users') LIMIT 1
2015.04.24 10:35:28 INFO http-bio-0.0.0.0-9100-exec-22 web[sql] 53ms Executed SQL: SELECT attr.attname, seq.relname FROM pg_class seq, pg_attribute attr, pg_depend dep, pg_namespa
ce name, pg_constraint cons WHERE seq.oid = dep.objid AND seq.relkind = 'S' AND attr.attrelid = dep.refobjid AND attr.attnum = dep.refobjsubid AND attr.attrelid = cons.conrelid AND a
ttr.attnum = cons.conkey[1] AND cons.contype = 'p' AND dep.refobjid = '"users"'::regclass
2015.04.24 10:35:28 INFO http-bio-0.0.0.0-9100-exec-22 web[sql] 2ms Executed SQL: INSERT INTO "users" ("name", "password", "email", "created", "fullname", "creationdate", "disabled
", "lastactivation", "link", "accountreactivation", "category", "firstname", "lastname", "login", "phoneno", "warehouseholderno", "crypted_password", "salt", "created_at", "updated_a
t", "remember_token", "remember_token_expires_at", "active") VALUES('MUSTERMANN, Marc', NULL, 'marc.mustermann#company.com', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, 'a4
28774', NULL, NULL, 'e285ed59429a05e40b250577fee291addedd09a1', '708e1dc34acb880756709e8db0d66999672478dc', '2015-04-24 10:35:28.400000', '2015-04-24 10:35:28.400000', NULL, NULL, 't
') RETURNING "id"
2015.04.24 10:35:28 ERROR http-bio-0.0.0.0-9100-exec-22 web[o.s.s.ui.JRubyFacade] Fail to render: http://build:9100/sessions/login
ActiveRecord::JDBCError: ERROR: column "password" of relation "users" does not exist
Position: 30: INSERT INTO "users" ("name", "password", "email", "created", "fullname", "creationdate", "disabled", "lastactivation", "link", "accountreactivation", "category", "fir
stname", "lastname", "login", "phoneno", "warehouseholderno", "crypted_password", "salt", "created_at", "updated_at", "remember_token", "remember_token_expires_at", "active") VALUES(
'MUSTERMANN, Marc', NULL, 'marc.mustermann@company.com', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, 'a428774', NULL, NULL, 'e285ed59429a05e40b250577fee291addedd09a1', '708
e1dc34acb880756709e8db0d66999672478dc', '2015-04-24 10:35:28.400000', '2015-04-24 10:35:28.400000', NULL, NULL, 't') RETURNING "id"
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract_adapter.rb:227:in `log'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract_adapter.rb:212:in `log'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/jdbc/adapter.rb:183:in `execute'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/jdbc/adapter.rb:275:in `select'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/jdbc/adapter.rb:212:in `select_one'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/database_statements.rb:19:in `selec
t_value'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/postgresql/adapter.rb:266:in `pg_insert'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/query_cache.rb:26:in `insert_with_q
uery_dirty'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/base.rb:2967:in `create'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/timestamp.rb:53:in `create_with_timestamps'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/callbacks.rb:266:in `create_with_callbacks'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/base.rb:2933:in `create_or_update'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/callbacks.rb:250:in `create_or_update_with_callbacks'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/base.rb:2583:in `save'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/validations.rb:1089:in `save_with_validation'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/dirty.rb:79:in `save_with_dirty'
org/jruby/RubyKernel.java:2227:in `send'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:229:in `with_transaction_returning_status'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/database_statements.rb:136:in `tran
saction'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:182:in `transaction'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:228:in `with_transaction_returning_status'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:196:in `save_with_transactions'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:208:in `rollback_active_record_state!'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:196:in `save_with_transactions'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/lib/need_authentication.rb:157:in `synchronize'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/database_statements.rb:136:in `tran
saction'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:182:in `transaction'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/lib/need_authentication.rb:122:in `synchronize'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/lib/need_authentication.rb:79:in `external_auth'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/lib/need_authentication.rb:101:in `auth'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/lib/need_authentication.rb:56:in `authenticate?'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/lib/need_authentication.rb:236:in `authenticate'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/app/controllers/sessions_controller.rb:30:in `login'
org/jruby/RubyKernel.java:2223:in `send'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/actionpack-2.3.15/lib/action_controller/base.rb:1333:in `perform_action'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/actionpack-2.3.15/lib/action_controller/filters.rb:617:in `call_filters'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/actionpack-2.3.15/lib/action_controller/filters.rb:610:in `perform_action_with_filters'
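Both failures complain that the users table has no password column. As a hedged way to see which columns the table actually has (connection details taken from the sonar.properties below; assuming psql is available):
# describe the users table on the PostgreSQL instance from sonar.jdbc.url
psql -h sonartest -p 5432 -U sonar -d test -c '\d users'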
Here are the plugins we use:
-rw-r--r-- 1 tsonar jtest 105 23. Apr 09:30 README.txt
-rw-r--r-- 1 tsonar jtest 2404852 23. Apr 09:31 sonar-checkstyle-plugin-2.3.jar
-rw-r--r-- 1 tsonar jtest 10325 23. Apr 09:30 sonar-cobertura-plugin-1.6.3.jar
-rw-r--r-- 1 tsonar jtest 6228395 23. Apr 09:31 sonar-findbugs-plugin-3.1.jar
-rw-r--r-- 1 tsonar jtest 2507184 23. Apr 09:31 sonar-java-plugin-3.1.jar
-rw-r--r-- 1 tsonar jtest 30646 23. Apr 09:31 sonar-ldap-plugin-1.4.jar
-rw-r--r-- 1 tsonar jtest 3154834 23. Apr 12:33 sonar-pdfreport-plugin-1.4.jar
-rw-r--r-- 1 tsonar jtest 3568440 23. Apr 09:31 sonar-pmd-plugin-2.3.jar
-rw-r--r-- 1 tsonar jtest 15342 23. Apr 09:31 sonar-useless-code-tracker-plugin-1.0.jar
-rw-r--r-- 1 tsonar jtest 340834 23. Apr 09:31 sonar-web-plugin-2.3.jar
-rw-r--r-- 1 tsonar jtest 13488 23. Apr 09:30 sonar-widget-lab-plugin-1.6.jar
For more information, please look at the sonar.properties below.
Thanks in advance
Regards
Gaby
sonar.properties
# This file must contain only ISO 8859-1 characters.
# See http://docs.oracle.com/javase/1.5.0/docs/api/java/util/Properties.html#load(java.io.InputStream)
#
# Property values can:
# - reference an environment variable, for example sonar.jdbc.url= ${env:SONAR_JDBC_URL}
# - be encrypted. See http://redirect.sonarsource.com/doc/settings-encryption.html
#--------------------------------------------------------------------------------------------------
# DATABASE
#
# IMPORTANT: the embedded H2 database is used by default. It is recommended for tests but not for
# production use. Supported databases are MySQL, Oracle, PostgreSQL and Microsoft SQLServer.
# User credentials.
# Permissions to create tables, indices and triggers must be granted to JDBC user.
# The schema must be created first.
sonar.jdbc.username=sonar
sonar.jdbc.password=sonartest
#----- Embedded Database (default)
# It does not accept connections from remote hosts, so the
# server and the analyzers must be executed on the same host.
#sonar.jdbc.url=jdbc:h2:tcp://localhost:9092/sonar
# H2 embedded database server listening port, defaults to 9092
#sonar.embeddedDatabase.port=9092
#----- MySQL 5.x
#sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
#----- Oracle 10g/11g
# - Only thin client is supported
# - Only versions 11.2.* of Oracle JDBC driver are supported, even if connecting to lower Oracle versions.
# - The JDBC driver must be copied into the directory extensions/jdbc-driver/oracle/
# - If you need to set the schema, please refer to http://jira.codehaus.org/browse/SONAR-5000
#sonar.jdbc.url=jdbc:oracle:thin:@localhost/XE
#----- PostgreSQL 8.x/9.x
# If you don't use the schema named "public", please refer to http://jira.codehaus.org/browse/SONAR-5000
sonar.jdbc.url=jdbc:postgresql://sonartest:5432/test
#----- Microsoft SQLServer 2005/2008
# Only the distributed jTDS driver is supported.
#sonar.jdbc.url=jdbc:jtds:sqlserver://localhost/sonar;SelectMethod=Cursor
#----- Connection pool settings
sonar.jdbc.maxActive=20
sonar.jdbc.maxIdle=5
sonar.jdbc.minIdle=2
sonar.jdbc.maxWait=5000
sonar.jdbc.minEvictableIdleTimeMillis=600000
sonar.jdbc.timeBetweenEvictionRunsMillis=30000
#--------------------------------------------------------------------------------------------------
# WEB SERVER
# Web server is executed in a dedicated Java process. By default its heap size is 768Mb.
# Use the following property to customize JVM options. Enabling the HotSpot Server VM
# mode (-server) is recommended.
# Note that the option -Dfile.encoding=UTF-8 is mandatory.
#sonar.web.javaOpts=-Xmx768m -XX:MaxPermSize=160m -XX:+HeapDumpOnOutOfMemoryError
# Same as the previous property, but lets you avoid repeating all the other settings
# like -Djava.awt.headless=true
#sonar.web.javaAdditionalOpts=
# Binding IP address. For servers with more than one IP address, this property specifies which
# address will be used for listening on the specified ports.
# By default, ports will be used on all IP addresses associated with the server.
#sonar.web.host=0.0.0.0
# Web context. When set, it must start with forward slash (for example /sonarqube).
# The default value is root context (empty value).
#sonar.web.context=
# TCP port for incoming HTTP connections. Disabled when value is -1.
sonar.web.port=9100
# TCP port for incoming HTTPS connections. Disabled when value is -1 (default).
#sonar.web.https.port=-1
#
# Recommendation for HTTPS
# SonarQube natively supports HTTPS. However, using a reverse proxy
# infrastructure is the recommended way to set up your SonarQube installation
# in production environments that need to be highly secured.
# This gives you full control over all the security parameters you need.
# HTTPS - the alias used for the server certificate in the keystore.
# If not specified the first key read in the keystore is used.
#sonar.web.https.keyAlias=
# HTTPS - the password used to access the server certificate from the
# specified keystore file. The default value is "changeit".
#sonar.web.https.keyPass=changeit
# HTTPS - the pathname of the keystore file where the server certificate is stored.
# By default, the pathname is the file ".keystore" in the user home.
# If the keystoreType doesn't need a file, use an empty value.
#sonar.web.https.keystoreFile=
# HTTPS - the password used to access the specified keystore file. The default
# value is the value of sonar.web.https.keyPass.
#sonar.web.https.keystorePass=
# HTTPS - the type of keystore file to be used for the server certificate.
# The default value is JKS (Java KeyStore).
#sonar.web.https.keystoreType=JKS
# HTTPS - the name of the keystore provider to be used for the server certificate.
# If not specified, the list of registered providers is traversed in preference order
# and the first provider that supports the keystore type is used (see sonar.web.https.keystoreType).
#sonar.web.https.keystoreProvider=
# HTTPS - the pathname of the truststore file which contains trusted certificate authorities.
# By default, this would be the cacerts file in your JRE.
# If the truststore type doesn't need a file, use an empty value.
#sonar.web.https.truststoreFile=
# HTTPS - the password used to access the specified truststore file.
#sonar.web.https.truststorePass=
# HTTPS - the type of truststore file to be used.
# The default value is JKS (Java KeyStore).
#sonar.web.https.truststoreType=JKS
# HTTPS - the name of the truststore provider to be used for the server certificate.
# If not specified, the list of registered providers is traversed in preference order
# and the first provider that supports the truststore type is used (see sonar.web.https.truststoreType).
#sonar.web.https.truststoreProvider=
# HTTPS - whether to enable client certificate authentication.
# The default is false (client certificates disabled).
# Other possible values are 'want' (certificates will be requested, but not required),
# and 'true' (certificates are required).
#sonar.web.https.clientAuth=false
# HTTPS - comma separated list of encryption ciphers to support for HTTPS connections.
# If specified, only the ciphers that are listed and supported by the SSL implementation will be used.
# By default, the default ciphers for the JVM will be used. Note that this usually means that the weak
# export grade ciphers, for instance RC4, will be included in the list of available ciphers.
# The ciphers are specified using the JSSE cipher naming convention (see
# https://www.openssl.org/docs/apps/ciphers.html)
# Example: sonar.web.https.ciphers=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
#sonar.web.https.ciphers=
# The maximum number of connections that the server will accept and process at any given time.
# When this number has been reached, the server will not accept any more connections until
# the number of connections falls below this value. The operating system may still accept connections
# based on the sonar.web.connections.acceptCount property. The default value is 50 for each
# enabled connector.
#sonar.web.http.maxThreads=50
#sonar.web.https.maxThreads=50
# The minimum number of threads always kept running. The default value is 5 for each
# enabled connector.
#sonar.web.http.minThreads=5
#sonar.web.https.minThreads=5
# The maximum queue length for incoming connection requests when all possible request processing
# threads are in use. Any requests received when the queue is full will be refused.
# The default value is 25 for each enabled connector.
#sonar.web.http.acceptCount=25
#sonar.web.https.acceptCount=25
# Access logs are generated in the file logs/access.log. This file is rolled over when it's 5Mb.
# An archive of 3 files is kept in the same directory.
# Access logs are enabled by default.
#sonar.web.accessLogs.enable=true
# TCP port for incoming AJP connections. Disabled if value is -1. Disabled by default.
#sonar.ajp.port=-1
#--------------------------------------------------------------------------------------------------
# SEARCH INDEX
# Elasticsearch is used to facilitate fast and accurate information retrieval.
# It is executed in a dedicated Java process.
# JVM options. Note that enabling the HotSpot Server VM mode (-server) is recommended.
#sonar.search.javaOpts=-Xmx256m -Xms256m -Xss256k -Djava.net.preferIPv4Stack=true \
# -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 \
# -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError
# Same as the previous property, but lets you avoid repeating all the other settings
# like -Djava.awt.headless=true
#sonar.search.javaAdditionalOpts=
# Elasticsearch port. Default is 9001. Use 0 to get a free port.
# This port must be private and must not be exposed to the Internet.
sonar.search.port=9102
#--------------------------------------------------------------------------------------------------
# UPDATE CENTER
# Update Center requires an internet connection to request http://update.sonarsource.org
# It is enabled by default.
sonar.updatecenter.activate=true
# HTTP proxy (default none)
http.proxyHost=proxy
http.proxyPort=4711
# NT domain name if NTLM proxy is used
#http.auth.ntlm.domain=
# SOCKS proxy (default none)
#socksProxyHost=
#socksProxyPort=
# Proxy authentication. The following 2 properties are used for HTTP and SOCKS proxies.
http.proxyUser=proxyuser
http.proxyPassword=proxypwd
#--------------------------------------------------------------------------------------------------
# LOGGING
# Level of information displayed in the logs: NONE (default), BASIC (functional information)
# and FULL (functional and technical details)
#sonar.log.profilingLevel=NONE
# Path to log files. Can be absolute or relative to installation directory.
# Default is <installation home>/logs
#sonar.path.logs=logs
#--------------------------------------------------------------------------------------------------
# OTHERS
# Delay in seconds between processing of notification queue. Default is 60 seconds.
#sonar.notifications.delay=60
# Paths to persistent data files (embedded database and search index) and temporary files.
# Can be absolute or relative to installation directory.
# Defaults are respectively <installation home>/data and <installation home>/temp
#sonar.path.data=data
#sonar.path.temp=temp
#--------------------------------------------------------------------------------------------------
# DEVELOPMENT - only for developers
# The following properties MUST NOT be used in production environments.
# Dev mode reloads web sources on changes and restarts the server when new versions
# of plugins are deployed.
#sonar.web.dev=false
# Path to webapp sources for hot-reloading of Ruby on Rails, JS and CSS (only core,
# plugins not supported).
#sonar.web.dev.sources=/path/to/server/sonar-web/src/main/webapp
# Uncomment to enable the Elasticsearch HTTP connector, so that ES can be directly requested through
# http://lmenezes.com/elasticsearch-kopf/?location=http://localhost:9010
#sonar.search.httpPort=9010
#-------------------
# Sonar LDAP Plugin
#-------------------
# LDAP configuration
# General Configuration
sonar.security.realm=LDAP
sonar.security.savePassword=true
sonar.authenticator.createUsers=true
sonar.security.updateUserAttributes=true
sonar.security.localUsers=admin,analysers
ldap.url=ldap://ldap.company.com
# (optional) Bind DN is the username of an LDAP user to connect (or bind) with.
ldap.bindDn: cn=ldap,ou=oudaten,DC=company,DC=com
# (optional) Bind Password is the password of the user to connect with.
ldap.bindPassword: pwdldap
# User Configuration
ldap.user.baseDn=CN=Users, DC=company,DC=com
ldap.user.request=(&(objectClass=user)(sAMAccountName={login}))
ldap.user.realNameAttribute=cn
ldap.user.emailAttribute=mail
# Group Configuration
#ldap.group.baseDn=ou=Groups,dc=sonarsource,dc=com
#ldap.group.request=(&(objectClass=posixGroup)(memberUid={uid}))
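# For Microsoft Active Directory the group request is usually matched on the
# member attribute instead; a hedged example, not part of the original file:
#ldap.group.request=(&(objectClass=group)(member={dn}))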
# -------------------------------------------------------------------------
# Properties of older versions of the LDAP plugin
# -------------------------------------------------------------------------
# Login Attribute is the attribute in LDAP holding the user's login.
# Default is 'uid'. Set 'sAMAccountName' for Microsoft Active Directory.
#ldap.user.loginAttribute: sAMAccountName
# Object class of LDAP users.
# Default is 'inetOrgPerson'. Set 'user' for Microsoft Active Directory.
#ldap.user.objectClass: user
sonar.log.profilingLevel=FULL
I have found the cause: the database schema used by SonarQube must be the only schema in the database. Otherwise this error occurs.
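For anyone who runs into the same error: a quick way to verify this is to list the schemas that actually exist in the SonarQube database. The snippet below is only a minimal sketch, assuming the JDBC URL and credentials from the sonar.properties above and the PostgreSQL JDBC driver on the classpath. Note that PostgreSQL also reports its built-in schemas (pg_catalog, information_schema), which are harmless; the point is that only one application schema (typically "public") should remain.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ListSchemas {
    public static void main(String[] args) throws Exception {
        // Connection details taken from the sonar.properties above; adjust as needed.
        try (Connection c = DriverManager.getConnection(
                "jdbc:postgresql://sonartest:5432/test", "sonar", "sonartest");
             ResultSet rs = c.getMetaData().getSchemas()) {
            // DatabaseMetaData.getSchemas() returns one row per schema
            // in the column TABLE_SCHEM.
            while (rs.next()) {
                System.out.println(rs.getString("TABLE_SCHEM"));
            }
        }
    }
}

If the output shows more than one application schema, move the extra schemas into a separate database (or drop them) and restart SonarQube.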
