I am trying to set up an Elasticsearch connector in GraphDB with the following query:
PREFIX :<http://www.ontotext.com/connectors/elasticsearch#>
PREFIX inst:<http://www.ontotext.com/connectors/elasticsearch/instance#>
INSERT DATA {
  inst:field :createConnector '''
{
  "fields": [
    {
      "fieldName": "crop",
      "propertyChain": [
        "https://data.agrimetrics.co.uk/ontologies/general/isLocationFor"
      ],
      "indexed": true,
      "stored": true,
      "analyzed": true,
      "multivalued": true,
      "fielddata": false,
      "objectFields": []
    }
  ],
  "languages": [],
  "types": [
    "https://data.agrimetrics.co.uk/ontologies/general/Y1x5BN2XVZIvn1"
  ],
  "readonly": false,
  "detectFields": false,
  "importGraph": false,
  "elasticsearchNode": "http://20.67.27.121:9200",
  "elasticsearchClusterSniff": true,
  "manageIndex": true,
  "manageMapping": true,
  "bulkUpdateBatchSize": 5000,
  "bulkUpdateRequestSize": 5242880
}
''' .
}
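(For reference, the connector instances GraphDB currently knows about, together with their options, can be listed with a query along these lines; this is a sketch of the connectors API rather than part of my setup, and ?connector / ?definition are just variable names.)
PREFIX :<http://www.ontotext.com/connectors/elasticsearch#>
SELECT ?connector ?definition {
  ?connector :listConnectors ?definition .
}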
I can see that the index is created in Elasticsearch:
[2021-07-29T08:45:10,252][INFO ][o.e.c.m.MetadataCreateIndexService] [richardElastic] [field] creating index, cause [api], templates [], shards [1]/[1]
but on the GraphDB side I am getting a 500 error: "Unable to check if index exists". This is the Elasticsearch info output:
{
  "name" : "richardElastic",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "skU191RwSnOu7FQiFB7dBg",
  "version" : {
    "number" : "7.13.4",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "c5f60e894ca0c61cdbae4f5a686d9f08bcefc942",
    "build_date" : "2021-07-14T18:33:36.673943207Z",
    "build_snapshot" : false,
    "lucene_version" : "8.8.2",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
and this is the elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
# network.host: _eth0_
# network.host: [_local_, _site_, _global_]
# network.host: 0.0.0.0
network.host: _eth0:ipv4_
cluster.initial_master_nodes: node-1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
Any help gratefully appreciated.
This appears to be the relevant section of the logs:
[ERROR] 2021-09-02 13:06:15,723 [worker http://10.7.3.7:7080/repositories/farmData#http://10.7.3.5:8080/repositories/farmData | c.o.t.r.a.ClusterOperation] Error while executing transaction
file:/opt/graphdb-data/graphdb-master-home/data/repositories/farmData/txlog/transactions/d84/d84b8d2c-d410-49f5-a06a-a7cb18e390d8.tar
org.eclipse.rdf4j.http.server.HTTPException: null
at com.ontotext.trree.util.Http.call(Http.java:50)
at com.ontotext.trree.replicationcluster.RemoteWorkerRequest.postNext(RemoteWorkerRequest.java:342)
at com.ontotext.trree.replicationcluster.WorkerThread$3.call(WorkerThread.java:541)
at com.ontotext.trree.replicationcluster.WorkerThread$3.call(WorkerThread.java:524)
at com.ontotext.trree.replicationcluster.WorkerThread.execute(WorkerThread.java:966)
at com.ontotext.trree.replicationcluster.WorkerThread.run(WorkerThread.java:322)
If Elasticsearch is on a different network, try disabling cluster sniffing:
{
  "elasticsearchClusterSniff": false
}
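A minimal sketch of how to apply that, assuming the inst:field instance name from the question: drop the existing connector instance, then re-run the createConnector query with "elasticsearchClusterSniff": false in its options.
PREFIX :<http://www.ontotext.com/connectors/elasticsearch#>
PREFIX inst:<http://www.ontotext.com/connectors/elasticsearch/instance#>
# Remove the existing connector instance so it can be recreated with the new option
INSERT DATA {
  inst:field :dropConnector [] .
}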
Related
I'm using Logstash to read log files and send them to Elasticsearch. It works fine in streaming mode, creating a different index every day and writing logs in real time.
The problem is that yesterday at 3pm I accidentally deleted the index. It was recreated automatically and continued writing logs, but I have lost the logs from 12am to 3pm.
In order to re-index the logs from the beginning, I deleted the sincedb file and added ignore_older => 0 to the Logstash configuration. After that, I deleted the index again, but Logstash keeps streaming only new data and ignoring the old data.
My current Logstash configuration:
input {
  file {
    path => ["/someDirectory/Logs/20221220-00001.log"]
    start_position => "beginning"
    tags => ["prod"]
    ignore_older => 0
    sincedb_path => "/dev/null"
    type => "cowrie"
  }
}
filter {
  grok {
    match => ["path", "/var/www/cap/cap-server/Logs/%{GREEDYDATA:index_name}" ]
  }
}
output {
  elasticsearch {
    hosts => "IP:9200"
    user => "elastic"
    password => "xxxxxxxx"
    index => "logstash-log-%{index_name}"
  }
}
I would appreciate any help.
I'm also attaching the Elasticsearch configuration:
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
discovery.type: single-node
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
#action.destructive_requires_name: true
Note that after all configuration changes, Logstash and Elasticsearch were restarted.
Hi, my Kibana won't run because of this error, but the configuration looks OK to me; I don't see anything there without quotes ("").
Kibana FATAL TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string or an instance of Buffer or URL.
Maybe you have a solution?
Thanks for the help. ;P
## Basic
server.port: 5601
server.host: "0.0.0.0"
server.ssl.enabled: true
server.ssl.certificate: "/etc/kibana/ssl/kibana.crt"
server.ssl.key: "/etc/kibana/ssl/kibana.key"
server.ssl.supportedProtocols: ["TLSv1.2", "TLSv1.3"]
#server.basePath: "192.168.3.129"
#server.rewriteBasePath: false
server.maxPayloadBytes: 2097152
## Logger settings
#logging.silent: false
#logging.quiet: true
#logging.verbose: true
#logging.dest: "/path/to/logfile"
#elasticsearch.logQueries: true
## The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
elasticsearch.hosts: ["http://127.0.0.1:8000"]
elasticsearch.username: "logserver"
elasticsearch.password: "logserver"
elasticsearch.requestTimeout: 40000
elasticsearch.shardTimeout: 30000
#elasticsearch.preserveHost: true
#kibana.index: ".kibana"
#login.cookieKeepAlive: true
#login.isSameSite: "Strict"
#login.isSecure: true
## Elasticsearch trafic encryption
#elasticsearch.ssl.verificationMode: full
#elasticsearch.ssl.certificate: "/etc/elasticsearch/ssl/hostname.crt"
#elasticsearch.ssl.key: "/etc/elasticsearch/ssl/hostname.key"
#elasticsearch.ssl.certificateAuthorities: "/etc/elasticsearch/ssl/certificate_authority.crt"
#elasticsearch.requestHeadersWhitelist: [ authorization ]
#elasticsearch.customHeaders: {}
## Elastfilter
#elastfilter.refreshtimeout: 1000000
#elastfilter.proxytimeout: 6000000
#elastfilter.bodySizeLimit: 300
#elastfilter.url: "http://127.0.0.1:9200"
##
##
## Superuser & supergroup
#elastfilter.admin: "logserver"
#elastfilter.role: "admin"
## Elastfilter plugin
#elastfilter.username: "logserver"
#elastfilter.password: "logserver"
## Scheduler plugin
#elastscheduler.commandpath: "/opt/ai/bin"
#elastscheduler.removefromcommand: ['\.\.', '|', '\$']
#elastscheduler.username: "scheduler"
#elastscheduler.password: "scheduler"
## Cerebro plugin
#cerebro.user: "logserver"
#cerebro.password: "logserver"
#cerebro.port: 5602
#cerebro.clustername: "logserver"
## AD/RADIUS/LDAP plugin
#login.sso_enabled: false
#login.radius_enabled: false
## Agents plugin
#agents.repository: "/path/to/repo/dir"
## Archive, default /usr/share/kibana/plugins/archive/archives/
#archive.archivefolderpath: "/path/to/archive/files"
#archive.compressionOptions: ["-T0", "-22", "--ultra", "--zstd=wlog=23,clog=23,hlog=22,slog=6,mml=3,tlen=48,strat=6"]
## Network Probe plugin
network-probe.enabled: false
#network-probe.certificateAuthorities: ["/example/rootCA1.crt", "/example/rootCA2.crt"]
#network-probe.clientCertificate: "/example/cert.crt"
#network-probe.clientCertificateKey: "/example/cert_key.key"
#network-probe.clientCertificateKeyPassword: "password"
## Alerts plugin
#alerts.enabled: true
## DevTools plugin
console.enabled: true
## Timelion plugin
#timelion.enabled: true
## Wazuh (changing this will rebuild kibana bundles and takes up to 4 min)
#wazuh.enabled: true
## Wiki.js plugin
#login.wiki_port: 5603
#login.wiki_protocol: "https"
## Home Page settings
kibana.defaultAppId: "discover"
## Telemetry restrictions
#telemetry.enabled: false
#telemetry.optIn: false
#telemetry.allowChangingOptInStatus: false
I have done the setup on Windows 10. I am getting the below output when executing elastalert-test-rule for my rule.
elastalert-test-rule example_rules\example_frequency.yaml --config config.yaml
Would have written the following documents to writeback index (default is elastalert_status):
elastalert_status - {'rule_name': 'Example frequency rule', 'endtime': datetime.datetime(2020, 4, 19, 18, 49, 10, 397745, tzinfo=tzutc()), 'starttime': datetime.datetime(2019, 4, 17, 3, 13, 10, 397745, tzinfo=tzutc()), 'matches': 4, 'hits': 4, '#timestamp': datetime.datetime(2020, 4, 19, 18, 55, 56, 314841, tzinfo=tzutc()), 'time_taken': 405.48910188674927}
However, no alert is triggered.
Please find below the contents of config.yaml and example_frequency.yaml.
config.yaml
# This is the folder that contains the rule yaml files
# Any .yaml file will be loaded as a rule
rules_folder: example_rules
# How often ElastAlert will query Elasticsearch
# The unit can be anything from weeks to seconds
run_every:
  seconds: 5
# ElastAlert will buffer results from the most recent
# period of time, in case some log sources are not in real time
buffer_time:
  minutes: 15
# The Elasticsearch hostname for metadata writeback
# Note that every rule can have its own Elasticsearch host
es_host: 127.0.0.1
# The Elasticsearch port
es_port: 9200
# The AWS region to use. Set this when using AWS-managed elasticsearch
#aws_region: us-east-1
# The AWS profile to use. Use this if you are using an aws-cli profile.
# See http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
# for details
#profile: test
# Optional URL prefix for Elasticsearch
#es_url_prefix: elasticsearch
# Connect with TLS to Elasticsearch
#use_ssl: True
# Verify TLS certificates
#verify_certs: True
# GET request with body is the default option for Elasticsearch.
# If it fails for some reason, you can pass 'GET', 'POST' or 'source'.
# See http://elasticsearch-py.readthedocs.io/en/master/connection.html?highlight=send_get_body_as#transport
# for details
#es_send_get_body_as: GET
# Option basic-auth username and password for Elasticsearch
#es_username: someusername
#es_password: somepassword
# Use SSL authentication with client certificates client_cert must be
# a pem file containing both cert and key for client
#verify_certs: True
#ca_certs: /path/to/cacert.pem
#client_cert: /path/to/client_cert.pem
#client_key: /path/to/client_key.key
# The index on es_host which is used for metadata storage
# This can be a unmapped index, but it is recommended that you run
# elastalert-create-index to set a mapping
writeback_index: elastalert_status
writeback_alias: elastalert_alerts
# If an alert fails for some reason, ElastAlert will retry
# sending the alert until this time period has elapsed
alert_time_limit:
  days: 2
# Custom logging configuration
# If you want to setup your own logging configuration to log into
# files as well or to Logstash and/or modify log levels, use
# the configuration below and adjust to your needs.
# Note: if you run ElastAlert with --verbose/--debug, the log level of
# the "elastalert" logger is changed to INFO, if not already INFO/DEBUG.
#logging:
# version: 1
# incremental: false
# disable_existing_loggers: false
# formatters:
# logline:
# format: '%(asctime)s %(levelname)+8s %(name)+20s %(message)s'
#
# handlers:
# console:
# class: logging.StreamHandler
# formatter: logline
# level: DEBUG
# stream: ext://sys.stderr
#
# file:
# class : logging.FileHandler
# formatter: logline
# level: DEBUG
# filename: elastalert.log
#
# loggers:
# elastalert:
# level: WARN
# handlers: []
# propagate: true
#
# elasticsearch:
# level: WARN
# handlers: []
# propagate: true
#
# elasticsearch.trace:
# level: WARN
# handlers: []
# propagate: true
#
# '': # root logger
# level: WARN
# handlers:
# - console
# - file
# propagate: false
example_frequency.yaml
# Alert when the rate of events exceeds a threshold
# (Optional)
# Elasticsearch host
# es_host: elasticsearch.example.com
# (Optional)
# Elasticsearch port
# es_port: 14900
# (OptionaL) Connect with SSL to Elasticsearch
#use_ssl: True
# (Optional) basic-auth username and password for Elasticsearch
#es_username: someusername
#es_password: somepassword
# (Required)
# Rule name, must be unique
name: Example frequency rule
# (Required)
# Type of alert.
# the frequency rule type alerts when num_events events occur with timeframe time
type: frequency
# (Required)
# Index to search, wildcard supported
index: com-*
# (Required, frequency specific)
# Alert when this many documents matching the query occur within a timeframe
num_events: 1
# (Required, frequency specific)
# num_events must occur within this amount of time to trigger an alert
timeframe:
  days: 365
# (Required)
# A list of Elasticsearch filters used for find events
# These filters are joined with AND and nested in a filtered query
# For more info: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl.html
filter:
- term:
    "log_json.response.statusCode": "404"
# (Required)
# The alert is use when a match is found
alert:
- "email"
# (required, email specific)
# a list of email addresses to send alerts to
email:
- "username#mydomain.com"
realert:
  minutes: 0
What is it that I am missing in order to receive alerts? I don't see any errors on the console either.
The SMTP configuration is missing, which is why no alert is being sent.
Please include smtp_host, smtp_port, smtp_ssl and smtp_auth_file in your example_frequency.yaml.
Refer to the documentation for the Email alert type.
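For example, the additional settings in example_frequency.yaml could look roughly like this (the host, port and file path are placeholders; smtp_auth_file should point to a YAML file with user and password keys):
# Hypothetical SMTP settings for the email alerter - adjust to your mail server
smtp_host: "smtp.mydomain.com"
smtp_port: 465
smtp_ssl: true
smtp_auth_file: "C:/elastalert/smtp_auth.yaml"
from_addr: "elastalert@mydomain.com"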
I am using Elasticsearch 5.1.1 and Logstash 5.1.1. I imported 3 million rows from SQL Server into Elasticsearch via Logstash in 2 hours.
I have a single Windows machine with 4 GB RAM and a Core i3. Are there any additional configurations I should add to speed up the import?
I tried to change the logstash.yml settings via https://www.elastic.co/guide/en/logstash/current/logstash-settings-file.html
but it has no effect.
Logstash Configurations
input {
  jdbc {
    jdbc_driver_library => "D:\Usefull_Jars\sqljdbc4-4.0.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://192.168.5.14:1433;databaseName=DataSource;integratedSecurity=false;user=****;password=****;"
    jdbc_user => "****"
    jdbc_password => "****"
    statement => "SELECT * FROM RawData"
    jdbc_fetch_size => 1000
  }
}
output {
  elasticsearch {
    hosts => "localhost"
    index => "testdata"
    document_type => "testfeed"
    document_id => "%{id}"
    flush_size => 512
  }
}
logstash.yml
pipeline:
  batch:
    size: 125
    delay: 2
#
# Or as flat keys:
# pipeline.batch.size: 125
# pipeline.batch.delay: 5
# ------------ Pipeline Settings --------------
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
# This defaults to the number of the host's CPU cores.
pipeline.workers: 5
# How many workers should be used per output plugin instance
pipeline.output.workers: 5
# How many events to retrieve from inputs before sending to filters+workers
pipeline.batch.size: 125
# How long to wait before dispatching an undersized batch to filters+workers
# Value is in milliseconds.
# pipeline.batch.delay: 5
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
# queue.type: memory
#
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
# path.queue:
#
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 250mb
#
# queue.page_capacity: 250mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
# queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criteria is reached first
# Default is 1024mb or 1gb
#
# queue.max_bytes: 1024mb
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
# queue.checkpoint.interval: 1000
Thanks in advance ....
I have installed Elasticsearch 2.1.0 and Kibana 4.3.0 on a single machine.
kibana.yml configuration:
# Kibana is served by a back end server. This controls which port to use.
server.port: 5601
# The host to bind the server to.
server.host: "IP"
# A value to use as a XSRF token. This token is sent back to the server on each request
# and required if you want to execute requests from other clients (like curl).
# server.xsrf.token: ""
# If you are running kibana behind a proxy, and want to mount it at a path,
# specify that path here. The basePath can't end in a slash.
# server.basePath: ""
# The Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://IP:9200/"
# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana.index: ".kibana"
# The default application to load.
kibana.defaultAppId: "discover"
# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# elasticsearch.username: "user"
# elasticsearch.password: "pass"
# SSL for outgoing requests from the Kibana Server to the browser (PEM formatted)
# server.ssl.cert: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key
# Optional setting to validate that your Elasticsearch backend uses the same key files (PEM formatted)
# elasticsearch.ssl.cert: /path/to/your/client.crt
# elasticsearch.ssl.key: /path/to/your/client.key
# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# elasticsearch.ssl.ca: /path/to/your/CA.pem
# Set to false to have a complete disregard for the validity of the SSL
# certificate.
elasticsearch.ssl.verify: true
# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
# elasticsearch.requestTimeout: 300000
# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
# elasticsearch.shardTimeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# elasticsearch.startupTimeout: 5000
# Set the path to where you would like the process id file to be created.
pid.file: /var/run/kibana.pid
# If you would like to send the log output to a file you can set the path below.
logging.dest: /var/log/kibana/kibana.log
# Set this to true to suppress all logging output.
# logging.silent: false
# Set this to true to suppress all logging output except for error messages.
# logging.quiet: true
# Set this to true to log all events, including system usage information and all requests.
# logging.verbose: true
When I run curl -IP:5601, I get this output:
HTTP/1.1 200 OK
x-app-name: kibana
x-app-version: 4.3.0
cache-control: no-cache
content-type: text/html
content-length: 217
accept-ranges: bytes
Date: Wed, 20 Jan 2016 15:28:35 GMT
Connection: keep-alive
<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';
var hash = window.location.hash;
if (hash.length) {
  window.location = hashRoute + hash;
} else {
  window.location = defaultRoute;
}
</script>
Elasticsearch and Kibana are both up and running, yet I am still not able to access the Kibana GUI from the browser; it does not display the page.
I checked the configuration in elasticsearch.yml too; the host and IP are correct there. The curl command gives this output for Elasticsearch [command: curl http://IP:9200/]:
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.1.0",
    "build_hash" : "72cd1f1a3eee09505e036106146dc1949dc5dc87",
    "build_timestamp" : "2015-11-18T22:40:03Z",
    "build_snapshot" : false,
    "lucene_version" : "5.3.1"
  },
  "tagline" : "You Know, for Search"
}
Could anybody tell me what the issue could be?
Did you install Elasticsearch and Kibana on your local machine, i.e. the laptop or computer that you are working on, or is it running on a separate server?
If you are running it on the same machine from which you are accessing the browser, then you can just access it as localhost:port.
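For example, assuming Kibana is on its default port 5601, the following run on that machine should return the Kibana response headers:
curl -I http://localhost:5601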
As your error includes the status "Elasticsearch is still initializing the kibana index", I would recommend trying the steps mentioned on this page:
Elasticsearch is still initializing the kibana index