Not able to access Kibana GUI with http://IP:5601/ - elasticsearch

I have installed Elasticsearch 2.1.0 and Kibana 4.3.0 on a single machine.
kibana.yml configuration:
# Kibana is served by a back end server. This controls which port to use.
server.port: 5601
# The host to bind the server to.
server.host: "IP"
# A value to use as a XSRF token. This token is sent back to the server on each request
# and required if you want to execute requests from other clients (like curl).
# server.xsrf.token: ""
# If you are running kibana behind a proxy, and want to mount it at a path,
# specify that path here. The basePath can't end in a slash.
# server.basePath: ""
# The Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://IP:9200/"
# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana.index: ".kibana"
# The default application to load.
kibana.defaultAppId: "discover"
# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# elasticsearch.username: "user"
# elasticsearch.password: "pass"
# SSL for outgoing requests from the Kibana Server to the browser (PEM formatted)
# server.ssl.cert: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key
# Optional setting to validate that your Elasticsearch backend uses the same key files (PEM formatted)
# elasticsearch.ssl.cert: /path/to/your/client.crt
# elasticsearch.ssl.key: /path/to/your/client.key
# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# elasticsearch.ssl.ca: /path/to/your/CA.pem
# Set to false to have a complete disregard for the validity of the SSL
# certificate.
elasticsearch.ssl.verify: true
# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
# elasticsearch.requestTimeout: 300000
# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
# elasticsearch.shardTimeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# elasticsearch.startupTimeout: 5000
# Set the path to where you would like the process id file to be created.
pid.file: /var/run/kibana.pid
# If you would like to send the log output to a file you can set the path below.
logging.dest: /var/log/kibana/kibana.log
# Set this to true to suppress all logging output.
# logging.silent: false
# Set this to true to suppress all logging output except for error messages.
# logging.quiet: true
# Set this to true to log all events, including system usage information and all requests.
# logging.verbose: true
When I run curl -i IP:5601, I get this output:
HTTP/1.1 200 OK
x-app-name: kibana
x-app-version: 4.3.0
cache-control: no-cache
content-type: text/html
content-length: 217
accept-ranges: bytes
Date: Wed, 20 Jan 2016 15:28:35 GMT
Connection: keep-alive
<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';
var hash = window.location.hash;
if (hash.length) {
  window.location = hashRoute + hash;
} else {
  window.location = defaultRoute;
}
</script>
Elasticsearch and Kibana are both up and running, yet I am still not able to access the Kibana GUI from the browser; the page does not display.
I also checked the configuration in elasticsearch.yml; the host and IP are correct there. Curl gives this output for Elasticsearch [command: curl http://IP:9200/]:
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.1.0",
    "build_hash" : "72cd1f1a3eee09505e036106146dc1949dc5dc87",
    "build_timestamp" : "2015-11-18T22:40:03Z",
    "build_snapshot" : false,
    "lucene_version" : "5.3.1"
  },
  "tagline" : "You Know, for Search"
}
Could anybody tell me what the issue could be?

Did you install Elasticsearch and Kibana on your local machine, i.e. the laptop or computer you are working on, or is it running on a separate server?
If you are running it on the same machine you are browsing from, you can simply access it as localhost:port.
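If you are accessing it from another machine, a quick sanity check (a sketch, assuming a Linux host and the default port 5601) is to confirm what address Kibana actually bound to and whether a firewall is blocking the port:
# show the listening socket for Kibana
ss -tlnp | grep 5601
# from the Kibana machine itself
curl -I http://localhost:5601/
# from another machine, to separate a binding problem from a firewall problem
curl -I http://IP:5601/
Keep in mind that server.host controls the bind address: a specific IP makes Kibana listen only on that interface, while "0.0.0.0" listens on all interfaces.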

As your error includes the status "Elasticsearch is still initializing the kibana index", I would recommend trying the steps mentioned on this page:
Elasticsearch is still initializing the kibana index
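Before recreating anything, it may also help to check the state of the .kibana index and the cluster with the standard Elasticsearch APIs (a sketch; IP is the host used in the question):
curl "http://IP:9200/_cluster/health?pretty"
curl "http://IP:9200/_cat/indices?v"
# if .kibana is stuck and holds nothing you care about, the commonly suggested
# (destructive) fix is to delete it and let Kibana recreate it on restart:
curl -XDELETE "http://IP:9200/.kibana"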

Related

Deleted logs are not rewritten to Elasticsearch

I'm using Logstash to read log files and send them to Elasticsearch. It works fine in streaming mode, creating a different index every day and writing logs in real time.
The problem is that yesterday at 3pm I accidentally deleted the index. It was recreated automatically and logging continued, but I lost the logs from 12am to 3pm.
In order to rewrite the log from the beginning, I deleted the sincedb file and also added ignore_older => 0 to the Logstash configuration. After that, I deleted the index again. But it keeps streaming only new data, ignoring the old entries.
My current configuration of logstash:
input {
  file {
    path => ["/someDirectory/Logs/20221220-00001.log"]
    start_position => "beginning"
    tags => ["prod"]
    ignore_older => 0
    sincedb_path => "/dev/null"
    type => "cowrie"
  }
}
filter {
  grok {
    match => ["path", "/var/www/cap/cap-server/Logs/%{GREEDYDATA:index_name}" ]
  }
}
output {
  elasticsearch {
    hosts => "IP:9200"
    user => "elastic"
    password => "xxxxxxxx"
    index => "logstash-log-%{index_name}"
  }
}
I would appreciate any help.
I'm also attaching Elasticsearch configuration:
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
discovery.type: single-node
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
#action.destructive_requires_name: true
Note that after every configuration change, Logstash and Elasticsearch were restarted.
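One thing worth double-checking here (a sketch, not a definitive diagnosis): the file input only applies start_position and sincedb_path when the pipeline starts, and ignore_older skips files whose last modification is older than the given number of seconds, so a value of 0 may itself be excluding the very file you want to re-read. Removing it (or setting it very large) and deleting the index while Logstash is stopped is the usual order of operations:
# stop Logstash so that, on the next start, the file is read again from the beginning
systemctl stop logstash
# delete the index that should be rebuilt (the index name here is a placeholder; use yours)
curl -u elastic:xxxxxxxx -X DELETE "http://IP:9200/logstash-log-YOUR_INDEX_NAME"
# remove ignore_older from the file input (or set it to a large value), then restart
systemctl start logstash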

Enabling SSL, but collection shard base_url still shows http communication

I am new to Solr 8.11.2 and am trying to enable SSL and authentication. When I follow the manual, everything starts working, but the communication between nodes and shards is still over HTTP.
https://127.0.0.1:8981/solr/admin/collections?action=CLUSTERSTATUS&indent=on
{ "responseHeader":{ "status":0, "QTime":4}, "cluster":{ "collections":{ ".system":{ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"80000000-7fffffff", "state":"active", "replicas":{ "core_node3":{ "core":".system_shard1_replica_n1", "base_url":"http://solr3:8984/solr", "node_name":"solr3:8984_solr", "state":"active", "type":"NRT", "force_set_state":"false", "leader":"true"}, "core_node4":{ "core":".system_shard1_replica_n2", "base_url":"http://solr1:8984/solr", "node_name":"solr1:8984_solr", "state":"active", "type":"NRT", "force_set_state":"false"}}}}, "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0", "znodeVersion":6, "configName":".system"}}, "properties":{"urlScheme":"https"}, "live_nodes":["solr2:8984_solr", "solr1:8984_solr", "solr3:8984_solr"]}}
my environment settings:
SOLR_SSL_ENABLED: 'true'
SOLR_SSL_KEY_STORE: /etc/solr-ssl.keystore.jks
SOLR_SSL_KEY_STORE_PASSWORD: $SOLR_SECRET
SOLR_SSL_TRUST_STORE: /etc/solr-ssl.keystore.jks
SOLR_SSL_TRUST_STORE_PASSWORD: $SOLR_SECRET
# Require clients to authenticate
SOLR_SSL_NEED_CLIENT_AUTH: 'false'
# Enable clients to authenticate (but not require)
SOLR_SSL_WANT_CLIENT_AUTH: 'false'
# Define Key Store type if necessary
SOLR_SSL_KEY_STORE_TYPE: JKS
SOLR_SSL_TRUST_STORE_TYPE: JKS
SOLR_SSL_CHECK_PEER_NAME: 'false'
Am I missing anything?
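One common explanation (a sketch under that assumption, not a confirmed diagnosis): each replica's base_url is written into the collection's state.json when the replica is created, so replicas that existed before SSL was enabled keep their http:// URLs even though the urlScheme cluster property now says https. Setting the property explicitly via the Collections API and then recreating the collection, or deleting and re-adding the affected replicas, rewrites the stored URLs:
# confirm/set the cluster-wide URL scheme (Collections API, CLUSTERPROP action)
curl -k "https://127.0.0.1:8981/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https"
# then recreate the collection, or DELETEREPLICA/ADDREPLICA each replica, so that
# state.json is rewritten with https base_url values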

Kibana FATAL TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string or an instance of Buffer or URL

Hi, my Kibana won't run because of this error, but the config looks fine to me; I don't see anything there that is missing quotes ("").
Kibana FATAL TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string or an instance of Buffer or URL.
Maybe you have a solution?
Thanks for the help ;P
## Basic
server.port: 5601
server.host: "0.0.0.0"
server.ssl.enabled: true
server.ssl.certificate: "/etc/kibana/ssl/kibana.crt"
server.ssl.key: "/etc/kibana/ssl/kibana.key"
server.ssl.supportedProtocols: ["TLSv1.2", "TLSv1.3"]
#server.basePath: "192.168.3.129"
#server.rewriteBasePath: false
server.maxPayloadBytes: 2097152
## Logger settings
#logging.silent: false
#logging.quiet: true
#logging.verbose: true
#logging.dest: "/path/to/logfile"
#elasticsearch.logQueries: true
## The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
elasticsearch.hosts: ["http://127.0.0.1:8000"]
elasticsearch.username: "logserver"
elasticsearch.password: "logserver"
elasticsearch.requestTimeout: 40000
elasticsearch.shardTimeout: 30000
#elasticsearch.preserveHost: true
#kibana.index: ".kibana"
#login.cookieKeepAlive: true
#login.isSameSite: "Strict"
#login.isSecure: true
## Elasticsearch traffic encryption
#elasticsearch.ssl.verificationMode: full
#elasticsearch.ssl.certificate: "/etc/elasticsearch/ssl/hostname.crt"
#elasticsearch.ssl.key: "/etc/elasticsearch/ssl/hostname.key"
#elasticsearch.ssl.certificateAuthorities: "/etc/elasticsearch/ssl/certificate_authority.crt"
#elasticsearch.requestHeadersWhitelist: [ authorization ]
#elasticsearch.customHeaders: {}
## Elastfilter
#elastfilter.refreshtimeout: 1000000
#elastfilter.proxytimeout: 6000000
#elastfilter.bodySizeLimit: 300
#elastfilter.url: "http://127.0.0.1:9200"
##
## Superuser & supergroup
#elastfilter.admin: "logserver"
#elastfilter.role: "admin"
## Elastfilter plugin
#elastfilter.username: "logserver"
#elastfilter.password: "logserver"
## Scheduler plugin
#elastscheduler.commandpath: "/opt/ai/bin"
#elastscheduler.removefromcommand: ['\.\.', '|', '\$']
#elastscheduler.username: "scheduler"
#elastscheduler.password: "scheduler"
## Cerebro plugin
#cerebro.user: "logserver"
#cerebro.password: "logserver"
#cerebro.port: 5602
#cerebro.clustername: "logserver"
## AD/RADIUS/LDAP plugin
#login.sso_enabled: false
#login.radius_enabled: false
## Agents plugin
#agents.repository: "/path/to/repo/dir"
## Archive, default /usr/share/kibana/plugins/archive/archives/
#archive.archivefolderpath: "/path/to/archive/files"
#archive.compressionOptions: ["-T0", "-22", "--ultra", "--zstd=wlog=23,clog=23,hlog=22,slog=6,mml=3,tlen=48,strat=6"]
## Network Probe plugin
network-probe.enabled: false
#network-probe.certificateAuthorities: ["/example/rootCA1.crt", "/example/rootCA2.crt"]
#network-probe.clientCertificate: "/example/cert.crt"
#network-probe.clientCertificateKey: "/example/cert_key.key"
#network-probe.clientCertificateKeyPassword: "password"
## Alerts plugin
#alerts.enabled: true
## DevTools plugin
console.enabled: true
## Timelion plugin
#timelion.enabled: true
## Wazuh (changing this will rebuild kibana bundles and takes up to 4 min)
#wazuh.enabled: true
## Wiki.js plugin
#login.wiki_port: 5603
#login.wiki_protocol: "https"
## Home Page settings
kibana.defaultAppId: "discover"
## Telemetry restrictions
#telemetry.enabled: false
#telemetry.optIn: false
#telemetry.allowChangingOptInStatus: false
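This particular error usually means that something Kibana expects to be a filesystem path is arriving as a non-string value (empty, null, or a list). A way to narrow it down, sketched here under the assumption of a standard RPM/DEB install with the paths from the config above, is to verify the referenced files exist and then start Kibana in the foreground so the offending setting is easier to spot:
# do the certificate and key referenced above actually exist and are they readable?
ls -l /etc/kibana/ssl/kibana.crt /etc/kibana/ssl/kibana.key
# run Kibana in the foreground with verbose logging to see where it stops
/usr/share/kibana/bin/kibana --verbose
# if it starts, re-enable the commented path-type settings (ssl, archive, agents)
# one at a time to find the one that triggers the TypeError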

No alert received on elastalert-test-rule or while executing the rule

I have done the setup on Windows 10 and am getting the output below when executing elastalert-test-rule for my rule.
elastalert-test-rule example_rules\example_frequency.yaml --config config.yaml
Would have written the following documents to writeback index (default is elastalert_status):
elastalert_status - {'rule_name': 'Example frequency rule', 'endtime': datetime.datetime(2020, 4, 19, 18, 49, 10, 397745, tzinfo=tzutc()), 'starttime': datetime.datetime(2019, 4, 17, 3, 13, 10, 397745, tzinfo=tzutc()), 'matches': 4, 'hits': 4, '#timestamp': datetime.datetime(2020, 4, 19, 18, 55, 56, 314841, tzinfo=tzutc()), 'time_taken': 405.48910188674927}
However, no alert is triggered.
Please find below contents of config.yaml and example_frequency.yaml
config.yaml
# This is the folder that contains the rule yaml files
# Any .yaml file will be loaded as a rule
rules_folder: example_rules
# How often ElastAlert will query Elasticsearch
# The unit can be anything from weeks to seconds
run_every:
  seconds: 5
# ElastAlert will buffer results from the most recent
# period of time, in case some log sources are not in real time
buffer_time:
  minutes: 15
# The Elasticsearch hostname for metadata writeback
# Note that every rule can have its own Elasticsearch host
es_host: 127.0.0.1
# The Elasticsearch port
es_port: 9200
# The AWS region to use. Set this when using AWS-managed elasticsearch
#aws_region: us-east-1
# The AWS profile to use. Use this if you are using an aws-cli profile.
# See http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
# for details
#profile: test
# Optional URL prefix for Elasticsearch
#es_url_prefix: elasticsearch
# Connect with TLS to Elasticsearch
#use_ssl: True
# Verify TLS certificates
#verify_certs: True
# GET request with body is the default option for Elasticsearch.
# If it fails for some reason, you can pass 'GET', 'POST' or 'source'.
# See http://elasticsearch-py.readthedocs.io/en/master/connection.html?highlight=send_get_body_as#transport
# for details
#es_send_get_body_as: GET
# Option basic-auth username and password for Elasticsearch
#es_username: someusername
#es_password: somepassword
# Use SSL authentication with client certificates client_cert must be
# a pem file containing both cert and key for client
#verify_certs: True
#ca_certs: /path/to/cacert.pem
#client_cert: /path/to/client_cert.pem
#client_key: /path/to/client_key.key
# The index on es_host which is used for metadata storage
# This can be a unmapped index, but it is recommended that you run
# elastalert-create-index to set a mapping
writeback_index: elastalert_status
writeback_alias: elastalert_alerts
# If an alert fails for some reason, ElastAlert will retry
# sending the alert until this time period has elapsed
alert_time_limit:
  days: 2
# Custom logging configuration
# If you want to setup your own logging configuration to log into
# files as well or to Logstash and/or modify log levels, use
# the configuration below and adjust to your needs.
# Note: if you run ElastAlert with --verbose/--debug, the log level of
# the "elastalert" logger is changed to INFO, if not already INFO/DEBUG.
#logging:
# version: 1
# incremental: false
# disable_existing_loggers: false
# formatters:
# logline:
# format: '%(asctime)s %(levelname)+8s %(name)+20s %(message)s'
#
# handlers:
# console:
# class: logging.StreamHandler
# formatter: logline
# level: DEBUG
# stream: ext://sys.stderr
#
# file:
# class : logging.FileHandler
# formatter: logline
# level: DEBUG
# filename: elastalert.log
#
# loggers:
# elastalert:
# level: WARN
# handlers: []
# propagate: true
#
# elasticsearch:
# level: WARN
# handlers: []
# propagate: true
#
# elasticsearch.trace:
# level: WARN
# handlers: []
# propagate: true
#
# '': # root logger
# level: WARN
# handlers:
# - console
# - file
# propagate: false
example_frequency.yaml
# Alert when the rate of events exceeds a threshold
# (Optional)
# Elasticsearch host
# es_host: elasticsearch.example.com
# (Optional)
# Elasticsearch port
# es_port: 14900
# (OptionaL) Connect with SSL to Elasticsearch
#use_ssl: True
# (Optional) basic-auth username and password for Elasticsearch
#es_username: someusername
#es_password: somepassword
# (Required)
# Rule name, must be unique
name: Example frequency rule
# (Required)
# Type of alert.
# the frequency rule type alerts when num_events events occur with timeframe time
type: frequency
# (Required)
# Index to search, wildcard supported
index: com-*
# (Required, frequency specific)
# Alert when this many documents matching the query occur within a timeframe
num_events: 1
# (Required, frequency specific)
# num_events must occur within this amount of time to trigger an alert
timeframe:
  days: 365
# (Required)
# A list of Elasticsearch filters used to find events
# These filters are joined with AND and nested in a filtered query
# For more info: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl.html
filter:
- term:
    "log_json.response.statusCode": "404"
# (Required)
# The alert is used when a match is found
alert:
- "email"
# (required, email specific)
# a list of email addresses to send alerts to
email:
- "username#mydomain.com"
realert:
minutes: 0
What am I missing in order to receive alerts? I don't see any errors on the console either.
The SMTP configuration is missing, which is why no alert is being sent.
Please try to include smtp_host, smtp_port, smtp_ssl and smtp_auth_file in your example_frequency.yaml.
Refer to the documentation for the Email alert.
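A minimal sketch of what that might look like in example_frequency.yaml (the host, port and auth file path are placeholders; smtp_auth_file is only needed if your SMTP server requires a login):
# --- SMTP settings (placeholders, adjust to your mail server) ---
smtp_host: "smtp.mydomain.com"
smtp_port: 587
smtp_ssl: false
smtp_auth_file: "/path/to/smtp_auth_file.yaml"
from_addr: "elastalert@mydomain.com"
Note also that elastalert-test-rule only simulates the run by default; adding --alert makes it actually fire the email alerter:
elastalert-test-rule example_rules\example_frequency.yaml --config config.yaml --alert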

(Openstack) Unable to upload the image to the Image Service

I'm new to OpenStack and am trying to build my own OpenStack environment.
After following the "OpenStack Installation Guide for Red Hat Enterprise Linux 7, CentOS 7, and Fedora 20" (on Fedora 21), I ran into a problem when uploading CirrOS to the Image Service.
My OpenStack version, according to the command "[root@localhost ~]# keystone-manage --version", should be
2014.2.2
When I try to upload the image, I get this output:
ADMIN-OPENRC.SH:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=MYPASS
export OS_AUTH_URL=http://controller:35357/v2.0
[root@localhost ~]# source admin-openrc.sh
[root@localhost ~]# glance --debug image-create --name "cirros-0.3.3-x86_64" --file /tmp/images/cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --is-public True --progress
curl -i -X POST -H 'Accept-Encoding: gzip, deflate' -H 'x-image-meta-container_format: bare' -H 'Accept: /' -H 'X-Auth-Token: {SHA1}726116102202fa50ff0c064ca3cadb86b65fe997' -H 'x-image-meta-size: 13200896' -H 'Connection: keep-alive' -H 'x-image-meta-is_public: True' -H 'User-Agent: python-glanceclient' -H 'Content-Type: application/octet-stream' -H 'x-image-meta-disk_format: qcow2' -H 'x-image-meta-name: cirros-0.3.3-x86_64' http://controller:9292/v1/images
[=============================>] 100%
Request returned failure status 401. Invalid OpenStack Identity credentials.
I have to mention that I can get a token from keystone without problems:
[root@localhost ~]# keystone token-get
+-----------+----------------------------------+
| Property  | Value                            |
+-----------+----------------------------------+
| expires   | 2015-07-03T10:26:38Z             |
| id        | 96299e7c355d43a9b8e5b7f47a4d4cdd |
| tenant_id | 425de1784b644473b6f1cffe874992c5 |
| user_id   | 0a85326e1c744d449327894b6a276b5d |
+-----------+----------------------------------+
Here are my config files:
GLANCE-API.CONF & GLANCE-REGISTRY.CONF
connection=mysql://glance:MYPASS@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = MYPASS
KEYSTONE.CONF
connection=mysql://keystone:MYPASS@controller/keystone
Here is my api.log:
/var/log/glance/api.log
2015-07-03 11:15:00.763 3447 WARNING keystonemiddleware.auth_token [-] Retrying on HTTP connection exception: Unable to establish connection to http://controller:35357/
2015-07-03 11:15:01.266 3447 WARNING keystonemiddleware.auth_token [-] Retrying on HTTP connection exception: Unable to establish connection to http://controller:35357/
2015-07-03 11:15:02.269 3447 WARNING keystonemiddleware.auth_token [-] Retrying on HTTP connection exception: Unable to establish connection to http://controller:35357/
2015-07-03 11:15:04.273 3447 ERROR keystonemiddleware.auth_token [-] HTTP connection exception: Unable to establish connection to http://controller:35357/
2015-07-03 11:15:04.274 3447 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2015-07-03 11:15:04.274 3447 INFO keystonemiddleware.auth_token [-] Invalid user token - deferring reject downstream
2015-07-03 11:15:04.327 3447 INFO glance.wsgi.server [-] 192.168.13.92 - - [03/Jul/2015 11:15:04] "POST /v1/images HTTP/1.1" 401 571 3.579172
2015-07-03 11:30:29.083 3446 WARNING keystonemiddleware.auth_token [-] Retrying on HTTP connection exception: Unable to establish connection to http://controller:35357/
2015-07-03 11:30:29.587 3446 WARNING keystonemiddleware.auth_token [-] Retrying on HTTP connection exception: Unable to establish connection to http://controller:35357/
2015-07-03 11:30:30.591 3446 WARNING keystonemiddleware.auth_token [-] Retrying on HTTP connection exception: Unable to establish connection to http://controller:35357/
2015-07-03 11:30:32.595 3446 ERROR keystonemiddleware.auth_token [-] HTTP connection exception: Unable to establish connection to http://controller:35357/
2015-07-03 11:30:32.595 3446 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2015-07-03 11:30:32.595 3446 INFO keystonemiddleware.auth_token [-] Invalid user token - deferring reject downstream
2015-07-03 11:30:32.649 3446 INFO glance.wsgi.server [-] 192.168.13.92 - - [03/Jul/2015 11:30:32] "POST /v1/images HTTP/1.1" 401 571 3.581761
Thanks for your effort
Kevin
--------------------------EDIT-----------------------------
Full Glance-Registry.conf:
[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose=True
# Show debugging output in logs (sets DEBUG log level output)
#debug=False
# Address to bind the registry server
#bind_host=0.0.0.0
# Port the bind the registry server to
#bind_port=9191
# Log to this file. Make sure you do not set the same log file for both the API
# and registry servers!
#
# If `log_file` is omitted and `use_syslog` is false, then log messages are
# sent to stdout as a fallback.
#log_file=/var/log/glance/registry.log
# Backlog requests when creating socket
#backlog=4096
# TCP_KEEPIDLE value in seconds when creating socket.
# Not supported on OS X.
#tcp_keepidle=600
# API to use for accessing data. Default value points to sqlalchemy
# package.
#data_api=glance.db.sqlalchemy.api
# The number of child process workers that will be
# created to service Registry requests. The default will be
# equal to the number of CPUs available. (integer value)
#workers=None
# Enable Registry API versions individually or simultaneously
#enable_v1_registry=True
#enable_v2_registry=True
# Limit the api to return `param_limit_max` items in a call to a container. If
# a larger `limit` query param is provided, it will be reduced to this value.
#api_limit_max=1000
# If a `limit` query param is not provided in an api request, it will
# default to `limit_param_default`
#limit_param_default=25
# Role used to identify an authenticated user as administrator
#admin_role=admin
# Whether to automatically create the database tables.
# Default: False
#db_auto_create=False
# Enable DEBUG log messages from sqlalchemy which prints every database
# query and response.
# Default: False
#sqlalchemy_debug=True
# ================= Syslog Options ============================
# Send logs to syslog (/dev/log) instead of to file specified
# by `log_file`
#use_syslog=False
# Facility to use. If unset defaults to LOG_USER.
#syslog_log_facility=LOG_LOCAL1
# ================= SSL Options ===============================
# Certificate file to use when starting registry server securely
#cert_file=/path/to/certfile
# Private key file to use when starting registry server securely
#key_file=/path/to/keyfile
# CA certificate file to use to verify connecting clients
#ca_file=/path/to/cafile
# ============ Notification System Options =====================
# Driver or drivers to handle sending notifications. Set to
# 'messaging' to send notifications to a message queue.
notification_driver = noop
# Default publisher_id for outgoing notifications.
# default_publisher_id = image.localhost
# Messaging driver used for 'messaging' notifications driver
# rpc_backend = 'rabbit'
# Configuration options if sending notifications via rabbitmq (these are
# the defaults)
#rabbit_host=localhost
#rabbit_port=5672
#rabbit_use_ssl=false
#rabbit_userid=guest
#rabbit_password=guest
#rabbit_virtual_host=/
#rabbit_notification_exchange=glance
#rabbit_notification_topic=notifications
#rabbit_durable_queues=False
# Configuration options if sending notifications via Qpid (these are
# the defaults)
#qpid_notification_exchange=glance
#qpid_notification_topic=notifications
#qpid_hostname=localhost
#qpid_port=5672
#qpid_username=
#qpid_password=
#qpid_sasl_mechanisms=
#qpid_reconnect_timeout=0
#qpid_reconnect_limit=0
#qpid_reconnect_interval_min=0
#qpid_reconnect_interval_max=0
#qpid_reconnect_interval=0
#qpid_heartbeat=5
# Set to 'ssl' to enable SSL
#qpid_protocol=tcp
#qpid_tcp_nodelay=True
# ================= Database Options ==========================
[database]
# The file name to use with SQLite (string value)
#sqlite_db=glance.sqlite
# If True, SQLite uses synchronous mode (boolean value)
#sqlite_synchronous=True
# The backend to use for db (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend=sqlalchemy
# The SQLAlchemy connection string used to connect to the
# database (string value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
connection = mysql://glance:MYPASS@controller/glance
# The SQL mode to be used for MySQL sessions. This option,
# including the default, overrides any server-set SQL mode. To
# use whatever SQL mode is set by the server configuration,
# set this to no value. Example: mysql_sql_mode= (string
# value)
#mysql_sql_mode=TRADITIONAL
# Timeout before idle sql connections are reaped (integer
# value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout=3600
# Minimum number of SQL connections to keep open in a pool
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size=1
# Maximum number of SQL connections to keep open in a pool
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size=<None>
# Maximum db connection retries during startup. (setting -1
# implies an infinite retry count) (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries=10
# Interval between retries of opening a sql connection
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval=10
# If set, use this value for max_overflow with sqlalchemy
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow=<None>
# Verbosity of SQL debugging information. 0=None,
# 100=Everything (integer value)
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug=0
# Add python stack traces to SQL as comment strings (boolean
# value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace=False
# If set, use this value for pool_timeout with sqlalchemy
# (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout=<None>
# Enable the experimental use of database reconnect on
# connection lost (boolean value)
#use_db_reconnect=False
# seconds between db connection retries (integer value)
#db_retry_interval=1
# Whether to increase interval between db connection retries,
# up to db_max_retry_interval (boolean value)
#db_inc_retry_interval=True
# max seconds between db connection retries, if
# db_inc_retry_interval is enabled (integer value)
#db_max_retry_interval=10
# maximum db connection retries before error is raised.
# (setting -1 implies an infinite retry count) (integer value)
#db_max_retries=20
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = MYPASS
[paste_deploy]
# Name of the paste configuration file that defines the available pipelines
#config_file=/usr/share/glance/glance-registry-dist-paste.ini
# Partial name of a pipeline in your paste configuration file with the
# service name removed. For example, if your paste section name is
# [pipeline:glance-registry-keystone], you would configure the flavor below
# as 'keystone'.
flavor=keystone
[profiler]
# If False fully disable profiling feature.
#enabled=False
# If False doesn't trace SQL requests.
#trace_sqlalchemy=False
Glance-Api.conf:
[paste_deploy]
# Name of the paste configuration file that defines the available pipelines
#config_file=/usr/share/glance/glance-api-dist-paste.ini
# Partial name of a pipeline in your paste configuration file with the
# service name removed. For example, if your paste section name is
# [pipeline:glance-api-keystone], you would configure the flavor below
# as 'keystone'.
flavor=keystone
Kevin,
All your configs look fine. Here is what I would suggest you do:
1) Run glance image-list and see if you get anything.
2) Did you assign the admin role correctly to the glance user: "keystone user-role-add --user glance --tenant service --role admin"?
3) Did you run source admin-openrc.sh before running glance image-create?
HTH
Regards
Ashish
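For what it's worth, the api.log above points less at wrong credentials than at glance-api not being able to reach Keystone's admin endpoint at all ("Unable to establish connection to http://controller:35357/"). A few quick connectivity checks from the controller node may help (a sketch; hostnames, ports and the service name are taken from the configs above and may differ on your system):
# does the name 'controller' resolve to the right address?
getent hosts controller
# is the Keystone admin API answering on port 35357?
curl -s http://controller:35357/v2.0/
# is Keystone running, and is 35357 allowed through the firewall?
systemctl status openstack-keystone
iptables -L -n | grep 35357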
