I'm trying to set Elasticsearch to bind to an address other than localhost, and I'm having a lot of trouble.
Elasticsearch-oss 7.7 with Open Distro.
elasticsearch.yml:
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
I can't get it working with any syntax. I tried an array:
network.host: [ "127.0.0.1", "100.0.0.1" ]
...and tried different variations, such as special values:
network.host: 0.0.0.0
also not working...
network:
  host: _global_
also not working...
(using global address for testing)
network:
  host: _local_
working
network:
  host: _local_, _interface-name_
...not working.
Finally I found a way to bind to another address, and I can get a request through externally... but now localhost is failing!
network.host: localhost
http.host: 100.0.0.1
From the same server:
curl -XGET https://localhost:9200 -u admin:admin --insecure
curl: (7) Failed to connect to localhost port 9200: Connection refused
From the client:
curl -XGET https://100.0.0.1:9200 -u admin:admin --insecure
{
"name" : "somename",
"cluster_name" : "someclustername",
"cluster_uuid" : "someclusteruuid",
"version" : {
"number" : "7.7.0",
"build_flavor" : "oss",
"build_type" : "deb",
"build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
"build_date" : "2020-05-12T02:01:37.602180Z",
"build_snapshot" : false,
"lucene_version" : "8.5.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
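I suspect http.host overrides network.host for the HTTP layer, so HTTP is now bound only to 100.0.0.1 and the loopback binding is gone. A minimal sketch of what might bind both, assuming http.host accepts a list of addresses:
network.host: localhost
http.host: [ "localhost", "100.0.0.1" ]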
I'm waiting to hear your approach to this problem.
Thanks
[edit]
Now I've found a certificate error in the logs... I don't know if it is related.
I'm using the default security settings of the Open Distro plugin.
at java.lang.Thread.run(Thread.java:832) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
at sun.security.ssl.TransportContext.fatal(TransportContext.java:311) ~[?:?]
at sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:291) ~[?:?]
at sun.security.ssl.TransportContext.dispatch(TransportContext.java:184) ~[?:?]
at sun.security.ssl.SSLTransport.decode(SSLTransport.java:167) ~[?:?]
at sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:729) ~[?:?]
at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:684) ~[?:?]
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:499) ~[?:?]
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:475) ~[?:?]
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:634) ~[?:?]
Here is the full elasticsearch.yml.
The security cert options are the Open Distro defaults.
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: somename
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
network.host: localhost
http.host: 100.0.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
######## Start OpenDistro for Elasticsearch Security Demo Configuration ########
# WARNING: revise all the lines below before you go into production
opendistro_security.ssl.transport.pemcert_filepath: esnode.pem
opendistro_security.ssl.transport.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: esnode.pem
opendistro_security.ssl.http.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true
opendistro_security.authcz.admin_dn:
- CN=kirk,OU=client,O=client,L=test, C=de
opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
cluster.routing.allocation.disk.threshold_enabled: false
node.max_local_storage_nodes: 3
######## End OpenDistro for Elasticsearch Security Demo Configuration ########
What does "client" mean in this context?
A client node that is shipping logs to the server node, in this case for testing purposes.
I will configure the certs properly and set discovery.type to see if that fixes it (a sketch below).
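For reference, a minimal sketch of that discovery setting for a standalone test node (assuming no real cluster formation is needed):
discovery.type: single-node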
Thanks
Related
I am trying to define a list of seed hosts in elasticsearch.yml as shown below, but it throws some errors, which I have shared as well.
elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
http.port: 9500
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
192.168.1.10:9300
192.168.1.11
seeds.mydomain.com
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
Error
Exception in thread "main" 2020-05-01 15:10:08,817 main ERROR No Log4j 2 configuration file found. Using default configuration (logging only errors to the console), or user programmatically provided configurations. Set system property 'log4j2.debug' to show Log4j 2 internal initialization logging. See https://logging.apache.org/log4j/2.x/manual/configuration.html for instructions on how to configure Log4j 2
SettingsException[Failed to load settings from [elasticsearch.yml]]; nested: MarkedYAMLException[while scanning a simple key
in 'reader', line 69, column 1:
192.168.1.10:9300
^
could not find expected ':'
in 'reader', line 70, column 1:
192.168.1.11
^
at [Source: sun.nio.ch.ChannelInputStream#565f390; line: 68, column: 21]]; nested: ScannerException[while scanning a simple key
in 'reader', line 69, column 1:
192.168.1.10:9300
^
could not find expected ':'
in 'reader', line 70, column 1:
192.168.1.11
^
];
at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:1097)
at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:1070)
at org.elasticsearch.node.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:83)
at org.elasticsearch.cli.EnvironmentAwareCommand.createEnv(EnvironmentAwareCommand.java:95)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124)
at org.elasticsearch.cli.Command.main(Command.java:90)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92)
Caused by: com.fasterxml.jackson.dataformat.yaml.snakeyaml.error.MarkedYAMLException: while scanning a simple key
in 'reader', line 69, column 1:
192.168.1.10:9300
^
could not find expected ':'
in 'reader', line 70, column 1:
192.168.1.11
^
at [Source: sun.nio.ch.ChannelInputStream#565f390; line: 68, column: 21]
at com.fasterxml.jackson.dataformat.yaml.snakeyaml.error.MarkedYAMLException.from(MarkedYAMLException.java:27)
at com.fasterxml.jackson.dataformat.yaml.YAMLParser.nextToken(YAMLParser.java:343)
at org.elasticsearch.common.xcontent.json.JsonXContentParser.nextToken(JsonXContentParser.java:52)
at org.elasticsearch.common.settings.Settings.fromXContent(Settings.java:645)
at org.elasticsearch.common.settings.Settings.fromXContent(Settings.java:620)
at org.elasticsearch.common.settings.Settings.access$400(Settings.java:82)
at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:1093)
... 8 more
Caused by: while scanning a simple key
in 'reader', line 69, column 1:
192.168.1.10:9300
^
could not find expected ':'
in 'reader', line 70, column 1:
192.168.1.11
^
at org.yaml.snakeyaml.scanner.ScannerImpl.stalePossibleSimpleKeys(ScannerImpl.java:465)
at org.yaml.snakeyaml.scanner.ScannerImpl.needMoreTokens(ScannerImpl.java:280)
at org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:225)
at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingValue.produce(ParserImpl.java:585)
at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:157)
at org.yaml.snakeyaml.parser.ParserImpl.getEvent(ParserImpl.java:167)
at com.fasterxml.jackson.dataformat.yaml.YAMLParser.nextToken(YAMLParser.java:340)
You forgot to remove the hashtag (#) in front of the setting name discovery.seed_hosts. The # marks a comment, so the settings parser cannot find the setting key that the values belong to and therefore fails.
Furthermore, you need to put dashes before the values, since the setting expects an array of values.
#discovery.seed_hosts: ["host1", "host2"]
192.168.1.10:9300
192.168.1.11
seeds.mydomain.com
must be changed to
discovery.seed_hosts:
- 192.168.1.10:9300
- 192.168.1.11
- seeds.mydomain.com
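Alternatively, a single-line YAML array parses as well (equivalent to the dashed form above):
discovery.seed_hosts: ["192.168.1.10:9300", "192.168.1.11", "seeds.mydomain.com"]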
I have set up Elasticsearch with password protection, and I can successfully work with Elasticsearch by entering username=elastic and password=mypassword.
But now I am trying to import MySQL data into Elasticsearch using Logstash, and when I run Logstash with the command below it gives an error.
Am I missing something?
logstash -f mysql.conf
logstash-plain.log
[2019-06-14T18:12:34,410][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-06-14T18:12:34,424][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.1.0"}
[2019-06-14T18:12:35,400][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, {, } at line 16, column 23 (byte 507) after output {\r\n elasticsearch {\r\n\thosts => \"http://10.42.35.14:9200/\"\r\n user => elastic\r\n password => pharma", :backtrace=>["D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:151:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in `initialize'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/java_pipeline.rb:23:in `initialize'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}
[2019-06-14T18:12:35,758][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-06-14T18:12:40,664][INFO ][logstash.runner ] Logstash shut down.
mysql.conf
# file: contacts-index-logstash.conf
input {
jdbc {
jdbc_connection_string => "jdbc:mysql://52.213.22.96:3306/prbi"
jdbc_user => "myuser"
jdbc_password => "mypassword"
jdbc_driver_library => "mysql-connector-java-6.0.5.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
statement => "SELECT * from tmp_j_summaryreport"
}
}
output {
elasticsearch {
hosts => "http://10.42.35.14:9200/"
user => elastic
password => myelasticpassword
index => "testing123"
}
stdout { codec => json_lines }
}
logstash.yml
# Settings file in YAML
#
# Settings can be specified either in hierarchical form, e.g.:
#
# pipeline:
# batch:
# size: 125
# delay: 5
#
# Or as flat keys:
#
# pipeline.batch.size: 125
# pipeline.batch.delay: 5
#
# ------------ Node identity ------------
#
# Use a descriptive name for the node:
#
# node.name: test
#
# If omitted the node name will default to the machine's host name
#
# ------------ Data path ------------------
#
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
#
# path.data:
#
# ------------ Pipeline Settings --------------
#
# The ID of the pipeline.
#
# pipeline.id: main
#
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
#
# This defaults to the number of the host's CPU cores.
#
# pipeline.workers: 2
#
# How many events to retrieve from inputs before sending to filters+workers
#
# pipeline.batch.size: 125
#
# How long to wait in milliseconds while polling for the next event
# before dispatching an undersized batch to filters+outputs
#
# pipeline.batch.delay: 50
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
# path.config:
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
# config.reload.automatic: false
#
# How often to check if the pipeline configuration has changed (in seconds)
#
# config.reload.interval: 3s
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
# config.debug: false
#
# When enabled, process escaped characters such as \n and \" in strings in the
# pipeline configuration files.
#
# config.support_escapes: false
#
# ------------ Module Settings ---------------
# Define modules here. Modules definitions must be defined as an array.
# The simple way to see this is to prepend each `name` with a `-`, and keep
# all associated variables under the `name` they are associated with, and
# above the next, like this:
#
# modules:
# - name: MODULE_NAME
# var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
# var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
# var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
# var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
#
# ------------ Cloud Settings ---------------
# Define Elastic Cloud settings here.
# Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
# and it may have an label prefix e.g. staging:dXMtZ...
# This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
# cloud.id: <identifier>
#
# Format of cloud.auth is: <user>:<pass>
# This is optional
# If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
# If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
# cloud.auth: elastic:<password>
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
# queue.type: memory
#
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
# path.queue:
#
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 64mb
#
# queue.page_capacity: 64mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
# queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criteria is reached first
# Default is 1024mb or 1gb
#
# queue.max_bytes: 1024mb
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
# queue.checkpoint.interval: 1000
#
# ------------ Dead-Letter Queue Settings --------------
# Flag to turn on dead-letter queue.
#
# dead_letter_queue.enable: false
# If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
# will be dropped if they would increase the size of the dead letter queue beyond this setting.
# Default is 1024mb
# dead_letter_queue.max_bytes: 1024mb
# If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
# Default is path.data/dead_letter_queue
#
# path.dead_letter_queue:
#
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
# http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
# * fatal
# * error
# * warn
# * info (default)
# * debug
# * trace
#
# log.level: info
# path.logs:
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# path.plugins: []
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
#xpack.monitoring.enabled: false
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: password
#xpack.monitoring.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
#xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.sniffing: false
#xpack.monitoring.collection.interval: 10s
#xpack.monitoring.collection.pipeline.details.enabled: true
#
# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
#xpack.management.enabled: false
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
#xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s
#xpack.management.enabled: true
xpack.management.elasticsearch.hosts: "http://10.42.35.14:9200/"
#xpack.management.elasticsearch.username: logstash_system
xpack.management.elasticsearch.password: myelasticpassword
This message in the Logstash log indicates that there is something wrong with your config file:
Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError"
The rest of the message says that the problem is in your output block:
:message=>"Expected one of #, {, } at line 16, column 23 (byte 507) after output {
Double-check your output configuration; it needs to be something like this:
output {
elasticsearch {
hosts => ["10.42.35.14:9200"]
user => "elastic"
password => "myelasticpassword"
index => "testing123"
}
stdout { codec => "json_lines" }
}
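To catch this kind of syntax error before starting the pipeline, you can dry-run the config file with Logstash's built-in test flag:
logstash -f mysql.conf --config.test_and_exit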
This is my elasticsearch.yml:
cluster.name: cluster
node.name: esn1
path.conf: "/etc/elasticsearch"
path.data: "/var/lib/elasticsearch"
path.logs: "/var/log/elasticsearch"
network.host: 0.0.0.0
http.port: 9201
bootstrap.memory_lock: false
discovery.zen.minimum_master_nodes: 1
xpack.monitoring.enabled: false
xpack.graph.enabled: false
xpack.watcher.enabled: false
I've also installed x-pack:
# sudo /usr/share/elasticsearch/bin/elasticsearch-plugin list
repository-s3
x-pack
Nevertheless:
curl -XPUT 'http://localhost:9200/_xpack/security/user/elastic/_password' -d '
> {
> "password": "L5ngDgtl00?"
> }
> '
No handler found for uri [/_xpack/security/user/elastic/_password] and method [PUT]
Any ideas?
You're almost there, but I guess you're making a mistake in the curl command: the -u elastic option is missing.
See here: https://www.elastic.co/guide/en/x-pack/current/security-getting-started.html
Also, try to reinstall x-pack once by following step 1 in the above link.
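For example, a sketch of the same request with authentication and an explicit content type ("newpassword" is a placeholder; adjust the port if your node is really bound to 9201 as in the elasticsearch.yml above):
curl -u elastic -XPUT 'http://localhost:9200/_xpack/security/user/elastic/_password' -H 'Content-Type: application/json' -d '
{
  "password": "newpassword"
}
'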
I am trying to start my Elasticsearch server by running bin/elasticsearch from my ES directory, but I keep getting a BindTransportException. What should I do? Any ideas?
[2016-08-11 04:57:45,143][INFO ][node ] [anish-elk1] version[2.3.3], pid[30342], build[218bdf1/2016-05-17T15:40:04Z]
[2016-08-11 04:57:45,143][INFO ][node ] [anish-elk1] initializing ...
[2016-08-11 04:57:45,683][INFO ][plugins ] [anish-elk1] modules [lang-groovy, reindex, lang-expression], plugins [], sites []
[2016-08-11 04:57:45,707][INFO ][env ] [anish-elk1] using [1] data paths, mounts [[/ (/dev/xvda1)]], net usable_space [16.1gb], net total_space [49gb], spins? [no], types [ext4]
[2016-08-11 04:57:45,707][INFO ][env ] [anish-elk1] heap size [990.7mb], compressed ordinary object pointers [true]
[2016-08-11 04:57:47,647][INFO ][node ] [anish-elk1] initialized
[2016-08-11 04:57:47,648][INFO ][node ] [anish-elk1] starting ...
Exception in thread "main" BindTransportException[Failed to bind to [9300-9400]]; nested: ChannelException[Failed to bind to: /192.168.0.1:9400]; nested: BindException[Cannot assign requested address];
Likely root cause: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315)
at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
elasticsearch.yml
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
# name that es uses to find other clusters to join
# when you turn on a node, it will find other nodes on the network to talk to
# if found, it will cluster. name determines if node will join or not.
cluster.name: elk1
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
# every time you turn on a node, it will choose a Marvel comic character
# node.name is just a name
node.name: anish-elk1
#
# Add custom attributes to the node:
#
# node.rack: r1
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
# allows jvm to lock memory on startup to avoid swapping
bootstrap.mlockall: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
# setting to control network traffic. only allows traffic from :<> so that
# external processes cannot access elasticsearch server
# network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
...(everything else is commented out)
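In case it helps narrow this down: "Cannot assign requested address" usually means the process is trying to bind to an IP that no local interface owns. A sketch of how to check on a Linux host (10.0.0.5 below is a hypothetical address):
# list the addresses actually assigned to this machine
ip addr show
# then bind Elasticsearch to one of them in elasticsearch.yml, e.g.:
# network.host: 10.0.0.5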
After installing and configuring Elasticsearch, I got this error while checking the logs.
[2016-01-25 15:37:33,223][WARN ][bootstrap ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2016-01-25 15:37:33,223][WARN ][bootstrap ] This can result in part of the JVM being swapped out.
[2016-01-25 15:37:33,224][WARN ][bootstrap ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-01-25 15:37:33,224][WARN ][bootstrap ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
[2016-01-25 15:37:33,224][WARN ][bootstrap ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2016-01-25 15:37:33,428][INFO ][node ] [node-1] version[2.1.0], pid[13298], build[72cd1f1/2015-11-18T22:40:03Z]
[2016-01-25 15:37:33,428][INFO ][node ] [node-1] initializing ...
[2016-01-25 15:37:33,508][INFO ][plugins ] [node-1] loaded [], sites []
[2016-01-25 15:37:33,528][INFO ][env ] [node-1] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [43.8gb], net total_space [49.9gb], spins? [unknown], types [rootfs]
[2016-01-25 15:37:35,022][INFO ][node ] [node-1] initialized
[2016-01-25 15:37:35,022][INFO ][node ] [node-1] starting ...
[2016-01-25 15:37:35,088][INFO ][transport ] [node-1] publish_address {10.155.153.74:9300}, bound_addresses {10.155.153.74:9300}
[2016-01-25 15:37:35,097][INFO ][discovery ] [node-1] Elasticsearch/M0pCcU6UQ1ShHxlOZ4U22w
[2016-01-25 15:37:38,157][INFO ][cluster.service ] [node-1] new_master {node-1}{M0pCcU6UQ1ShHxlOZ4U22w}{10.155.153.74}{10.155.153.74:9300}{master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-01-25 15:37:38,195][INFO ][http ] [node-1] publish_address {10.155.153.74:9200}, bound_addresses {10.155.153.74:9200}
[2016-01-25 15:37:38,196][INFO ][node ] [node-1] started
[2016-01-25 15:37:38,250][INFO ][gateway ] [node-1] recovered [0] indices into cluster_state
[2016-01-25 15:37:45,458][INFO ][cluster.metadata ] [node-1] [.kibana] creating index, cause [api], templates [], shards [1]/[1], mappings [config
I checked that bootstrap.mlockall: true is set, too.
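One way to confirm whether the lock actually took effect is the nodes API, which reports the mlockall status per node (a sketch, assuming the node answers on 10.155.153.74:9200 as in the logs above):
curl 'http://10.155.153.74:9200/_nodes?filter_path=**.mlockall'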
elasticsearch.yml file
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: Elasticsearch
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
node.master: true
node.data: true
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.mlockall: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.155.153.74
network.publish_host: 10.155.153.74
network.bind_host: 10.155.153.74
#
# Set a custom port for HTTP:
#
http.port: 9200
discovery.zen.ping.multicast.enabled: false
# http.cors.enabled: true
# http.cors.allow-origin: http://tvmatp326579d.ad.infosys.com:5601/
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Elasticsearch nodes will find each other via unicast, by default.
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true
Can anybody tell me what the issue could be? Why is ES not able to lock the JVM memory?
Update:
Set the ES_HEAP_SIZE environment variable.
Ref: Heap sizing
Question 2 (in the comments): make sure that ports 9200 and 9300 are not blocked by the firewall.
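Also, if Elasticsearch is managed by systemd, the limits.conf entries may not apply to the service; a sketch of a unit override that raises the memlock limit (the path assumes a standard systemd install):
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity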