BindTransportException when trying to start Elasticsearch

I am trying to start my Elasticsearch server by running bin/elasticsearch from my ES directory, but I keep getting a BindTransportException. What should I do? Any ideas?
[2016-08-11 04:57:45,143][INFO ][node ] [anish-elk1] version[2.3.3], pid[30342], build[218bdf1/2016-05-17T15:40:04Z]
[2016-08-11 04:57:45,143][INFO ][node ] [anish-elk1] initializing ...
[2016-08-11 04:57:45,683][INFO ][plugins ] [anish-elk1] modules [lang-groovy, reindex, lang-expression], plugins [], sites []
[2016-08-11 04:57:45,707][INFO ][env ] [anish-elk1] using [1] data paths, mounts [[/ (/dev/xvda1)]], net usable_space [16.1gb], net total_space [49gb], spins? [no], types [ext4]
[2016-08-11 04:57:45,707][INFO ][env ] [anish-elk1] heap size [990.7mb], compressed ordinary object pointers [true]
[2016-08-11 04:57:47,647][INFO ][node ] [anish-elk1] initialized
[2016-08-11 04:57:47,648][INFO ][node ] [anish-elk1] starting ...
Exception in thread "main" BindTransportException[Failed to bind to [9300-9400]]; nested: ChannelException[Failed to bind to: /192.168.0.1:9400]; nested: BindException[Cannot assign requested address];
Likely root cause: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315)
at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
elasticsearch.yml
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
# cluster.name is the name Elasticsearch uses to decide which cluster to join.
# When a node starts, it looks for other nodes on the network to talk to;
# if it finds nodes with the same cluster name, it joins their cluster.
cluster.name: elk1
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
# every time you start a node without a name, it will choose a Marvel comic character
# node.name is just a display name
node.name: anish-elk1
#
# Add custom attributes to the node:
#
# node.rack: r1
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
# allows the JVM to lock memory on startup to avoid swapping
bootstrap.mlockall: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
# setting to control network traffic: only allows traffic to the configured
# address so that external processes cannot access the Elasticsearch server
# network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
...(everything else is commented out)
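For what it's worth, "BindException: Cannot assign requested address" almost always means the transport bind address in the trace (192.168.0.1 here) is not assigned to any network interface on the host, so Elasticsearch cannot listen on it. A minimal sketch of the Network section lines that avoid this on a 2.x node, hedged on what you actually want to expose (use _local_ for loopback only, _site_ or the host's real IP to accept external traffic; check which addresses the host owns with ip addr first):

# bind only to an address this host actually owns
network.host: _local_
# transport port range (9300-9400 is the 2.x default)
transport.tcp.port: 9300-9400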

Related

Elasticsearch-oss cannot bind to local and external addresses at same time

Trying to set Elasticsearch to bind to an address other than local, I'm having a lot of trouble.
Elasticsearch-oss 7.7 with Open Distro.
elasticsearch.yml:
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
I cannot get any syntax to work... tried an array:
network.host: [ "127.0.0.1", "100.0.0.1" ]
...and tried different variations, like special values, etc.
network.host: 0.0.0.0
also not working...
network:
  host: _global_
also not working...
(using global address for testing)
network:
  host: _local_
working
network:
  host: _local_ , _interface-name_
...not working.
Finally I found a way to bind to another address, and I can get a request externally... but now localhost is failing!
network.host: localhost
http.host: 100.0.0.1
From the same server:
curl -XGET https://localhost:9200 -u admin:admin --insecure
curl: (7) Failed to connect to localhost port 9200: Connection refused
From the client:
curl -XGET https://100.0.0.1:9200 -u admin:admin --insecure
{
  "name" : "somename",
  "cluster_name" : "someclustername",
  "cluster_uuid" : "someclusteruuid",
  "version" : {
    "number" : "7.7.0",
    "build_flavor" : "oss",
    "build_type" : "deb",
    "build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
    "build_date" : "2020-05-12T02:01:37.602180Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
Awaiting your approach to this problem.
Thanks
[edit]
Now I found a certificate error in the logs... I don't know if it is related.
Using default security settings for Opendistro plugin
at java.lang.Thread.run(Thread.java:832) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
at sun.security.ssl.TransportContext.fatal(TransportContext.java:311) ~[?:?]
at sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:291) ~[?:?]
at sun.security.ssl.TransportContext.dispatch(TransportContext.java:184) ~[?:?]
at sun.security.ssl.SSLTransport.decode(SSLTransport.java:167) ~[?:?]
at sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:729) ~[?:?]
at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:684) ~[?:?]
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:499) ~[?:?]
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:475) ~[?:?]
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:634) ~[?:?]
Here is the full elasticsearch.yml.
The security cert options are the Open Distro defaults.
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: somename
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
network.host: localhost
http.host: 100.0.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
######## Start OpenDistro for Elasticsearch Security Demo Configuration ########
# WARNING: revise all the lines below before you go into production
opendistro_security.ssl.transport.pemcert_filepath: esnode.pem
opendistro_security.ssl.transport.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: esnode.pem
opendistro_security.ssl.http.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true
opendistro_security.authcz.admin_dn:
- CN=kirk,OU=client,O=client,L=test, C=de
opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
cluster.routing.allocation.disk.threshold_enabled: false
node.max_local_storage_nodes: 3
######## End OpenDistro for Elasticsearch Security Demo Configuration ########
What does "client" mean in this context?
A client node that is shipping logs to the server node, in this case for testing purposes.
I will configure the certs properly and set discovery.type to see if that fixes it.
Thanks
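One hedged reading of this behaviour: http.host does not add to network.host, it replaces the HTTP bind address, so with http.host: 100.0.0.1 the HTTP layer no longer listens on localhost at all, which is exactly why the local curl is refused while the external one succeeds. These host settings accept a list, so a minimal sketch that should serve HTTP on both (assuming 100.0.0.1 really is assigned to an interface on this server):

network.host: localhost
http.host: ["localhost", "100.0.0.1"]

If binding to a non-loopback address then trips the production bootstrap checks on a one-node cluster, discovery.type: single-node is the usual escape hatch.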

Unable to set list of hosts using the discovery.seed_hosts static setting

I am trying to define a list of seed hosts in elasticsearch.yml, but when I start Elasticsearch it shows the errors I have shared below.
elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
http.port: 9500
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
192.168.1.10:9300
192.168.1.11
seeds.mydomain.com
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
Error
Exception in thread "main" 2020-05-01 15:10:08,817 main ERROR No Log4j 2 configuration file found. Using default configuration (logging only errors to the console), or user programmatically provided configurations. Set system property 'log4j2.debug' to show Log4j 2 internal initialization logging. See https://logging.apache.org/log4j/2.x/manual/configuration.html for instructions on how to configure Log4j 2
SettingsException[Failed to load settings from [elasticsearch.yml]]; nested: MarkedYAMLException[while scanning a simple key
in 'reader', line 69, column 1:
192.168.1.10:9300
^
could not find expected ':'
in 'reader', line 70, column 1:
192.168.1.11
^
at [Source: sun.nio.ch.ChannelInputStream#565f390; line: 68, column: 21]]; nested: ScannerException[while scanning a simple key
in 'reader', line 69, column 1:
192.168.1.10:9300
^
could not find expected ':'
in 'reader', line 70, column 1:
192.168.1.11
^
];
at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:1097)
at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:1070)
at org.elasticsearch.node.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:83)
at org.elasticsearch.cli.EnvironmentAwareCommand.createEnv(EnvironmentAwareCommand.java:95)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124)
at org.elasticsearch.cli.Command.main(Command.java:90)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92)
Caused by: com.fasterxml.jackson.dataformat.yaml.snakeyaml.error.MarkedYAMLException: while scanning a simple key
in 'reader', line 69, column 1:
192.168.1.10:9300
^
could not find expected ':'
in 'reader', line 70, column 1:
192.168.1.11
^
at [Source: sun.nio.ch.ChannelInputStream#565f390; line: 68, column: 21]
at com.fasterxml.jackson.dataformat.yaml.snakeyaml.error.MarkedYAMLException.from(MarkedYAMLException.java:27)
at com.fasterxml.jackson.dataformat.yaml.YAMLParser.nextToken(YAMLParser.java:343)
at org.elasticsearch.common.xcontent.json.JsonXContentParser.nextToken(JsonXContentParser.java:52)
at org.elasticsearch.common.settings.Settings.fromXContent(Settings.java:645)
at org.elasticsearch.common.settings.Settings.fromXContent(Settings.java:620)
at org.elasticsearch.common.settings.Settings.access$400(Settings.java:82)
at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:1093)
... 8 more
Caused by: while scanning a simple key
in 'reader', line 69, column 1:
192.168.1.10:9300
^
could not find expected ':'
in 'reader', line 70, column 1:
192.168.1.11
^
at org.yaml.snakeyaml.scanner.ScannerImpl.stalePossibleSimpleKeys(ScannerImpl.java:465)
at org.yaml.snakeyaml.scanner.ScannerImpl.needMoreTokens(ScannerImpl.java:280)
at org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:225)
at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingValue.produce(ParserImpl.java:585)
at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:157)
at org.yaml.snakeyaml.parser.ParserImpl.getEvent(ParserImpl.java:167)
at com.fasterxml.jackson.dataformat.yaml.YAMLParser.nextToken(YAMLParser.java:340)
You forgot to remove the hashtag (#) in front of the setting name discovery.seed_hosts. The # is used for comments, so the settings parser cannot find the corresponding setting key for the values and therefore fails.
Furthermore, you need to put dashes before the values since the setting expects an array of values.
#discovery.seed_hosts: ["host1", "host2"]
192.168.1.10:9300
192.168.1.11
seeds.mydomain.com
must be changed to
discovery.seed_hosts:
- 192.168.1.10:9300
- 192.168.1.11
- seeds.mydomain.com
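For completeness, the inline flow-sequence form used in the commented-out example parses just as well, if you prefer to keep it on one line:

discovery.seed_hosts: ["192.168.1.10:9300", "192.168.1.11", "seeds.mydomain.com"]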

Ignoring the 'pipelines.yml' file because modules or command line options are specified

I have set up Elasticsearch with password protection, and I am successfully able to work with Elasticsearch by entering username=elastic and password=mypassword.
But now I am trying to import MySQL data into Elasticsearch using Logstash; when I run Logstash with the command below, it gives an error.
Am I missing something?
logstash -f mysql.conf
logstash-plain.log
[2019-06-14T18:12:34,410][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-06-14T18:12:34,424][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.1.0"}
[2019-06-14T18:12:35,400][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, {, } at line 16, column 23 (byte 507) after output {\r\n elasticsearch {\r\n\thosts => \"http://10.42.35.14:9200/\"\r\n user => elastic\r\n password => pharma", :backtrace=>["D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:151:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in `initialize'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/java_pipeline.rb:23:in `initialize'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}
[2019-06-14T18:12:35,758][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-06-14T18:12:40,664][INFO ][logstash.runner ] Logstash shut down.
mysql.conf
# file: contacts-index-logstash.conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://52.213.22.96:3306/prbi"
    jdbc_user => "myuser"
    jdbc_password => "mypassword"
    jdbc_driver_library => "mysql-connector-java-6.0.5.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * from tmp_j_summaryreport"
  }
}
output {
  elasticsearch {
    hosts => "http://10.42.35.14:9200/"
    user => elastic
    password => myelasticpassword
    index => "testing123"
  }
  stdout { codec => json_lines }
}
logstash.yml
# Settings file in YAML
#
# Settings can be specified either in hierarchical form, e.g.:
#
# pipeline:
# batch:
# size: 125
# delay: 5
#
# Or as flat keys:
#
# pipeline.batch.size: 125
# pipeline.batch.delay: 5
#
# ------------ Node identity ------------
#
# Use a descriptive name for the node:
#
# node.name: test
#
# If omitted the node name will default to the machine's host name
#
# ------------ Data path ------------------
#
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
#
# path.data:
#
# ------------ Pipeline Settings --------------
#
# The ID of the pipeline.
#
# pipeline.id: main
#
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
#
# This defaults to the number of the host's CPU cores.
#
# pipeline.workers: 2
#
# How many events to retrieve from inputs before sending to filters+workers
#
# pipeline.batch.size: 125
#
# How long to wait in milliseconds while polling for the next event
# before dispatching an undersized batch to filters+outputs
#
# pipeline.batch.delay: 50
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
# path.config:
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
# config.reload.automatic: false
#
# How often to check if the pipeline configuration has changed (in seconds)
#
# config.reload.interval: 3s
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
# config.debug: false
#
# When enabled, process escaped characters such as \n and \" in strings in the
# pipeline configuration files.
#
# config.support_escapes: false
#
# ------------ Module Settings ---------------
# Define modules here. Modules definitions must be defined as an array.
# The simple way to see this is to prepend each `name` with a `-`, and keep
# all associated variables under the `name` they are associated with, and
# above the next, like this:
#
# modules:
# - name: MODULE_NAME
# var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
# var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
# var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
# var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
#
# ------------ Cloud Settings ---------------
# Define Elastic Cloud settings here.
# Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
# and it may have a label prefix e.g. staging:dXMtZ...
# This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
# cloud.id: <identifier>
#
# Format of cloud.auth is: <user>:<pass>
# This is optional
# If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
# If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
# cloud.auth: elastic:<password>
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
# queue.type: memory
#
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
# path.queue:
#
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 64mb
#
# queue.page_capacity: 64mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
# queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criteria is reached first
# Default is 1024mb or 1gb
#
# queue.max_bytes: 1024mb
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
# queue.checkpoint.interval: 1000
#
# ------------ Dead-Letter Queue Settings --------------
# Flag to turn on dead-letter queue.
#
# dead_letter_queue.enable: false
# If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
# will be dropped if they would increase the size of the dead letter queue beyond this setting.
# Default is 1024mb
# dead_letter_queue.max_bytes: 1024mb
# If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
# Default is path.data/dead_letter_queue
#
# path.dead_letter_queue:
#
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
# http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
# * fatal
# * error
# * warn
# * info (default)
# * debug
# * trace
#
# log.level: info
# path.logs:
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# path.plugins: []
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
#xpack.monitoring.enabled: false
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: password
#xpack.monitoring.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
#xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.sniffing: false
#xpack.monitoring.collection.interval: 10s
#xpack.monitoring.collection.pipeline.details.enabled: true
#
# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
#xpack.management.enabled: false
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
#xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s
#xpack.management.enabled: true
xpack.management.elasticsearch.hosts: "http://10.42.35.14:9200/"
#xpack.management.elasticsearch.username: logstash_system
xpack.management.elasticsearch.password: myelasticpassword
This message in the Logstash log indicates that there is something wrong with your config file:
Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError"
The rest of the message says that the problem is in your output block:
:message=>"Expected one of #, {, } at line 16, column 23 (byte 507) after output {
Double check your output configuration, it needs to be something like this:
output {
  elasticsearch {
    hosts => ["10.42.35.14:9200"]
    user => "elastic"
    password => "myelasticpassword"
    index => "testing123"
  }
  stdout { codec => "json_lines" }
}
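A quick way to catch this class of mistake before starting the pipeline is Logstash's built-in config check, which parses the file and exits without processing any events:

logstash -f mysql.conf --config.test_and_exit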

ORA-12505, TNS: listener does not currently know of SID given in connect descriptor Vendor code 12505

I have just installed Oracle SQL Developer 11g, and when I start to make a new database connection it fails, so I am configuring the ora files (tnsnames.ora and LISTENER.ora), but their contents are not the same as the ora files shown in the tutorials I have been researching. So is this the right content for a tnsnames.ora and LISTENER.ora file? Thank you for answering.
This is my tnsnames.ora:
# This file contains the syntax information for
# the entries to be put in any tnsnames.ora file
# The entries in this file are need based.
# There are no defaults for entries in this file
# that Sqlnet/Net3 use that need to be overridden
#
# Typically you could have two tnsnames.ora files
# in the system, one that is set for the entire system
# and is called the system tnsnames.ora file, and a
# second file that is used by each user locally so that
# he can override the definitions dictated by the system
# tnsnames.ora file.
# The entries in tnsnames.ora are an alternative to using
# the names server with the onames adapter.
# They are a collection of aliases for the addresses that
# the listener(s) is(are) listening for a database or
# several databases.
# The following is the general syntax for any entry in
# a tnsnames.ora file. There could be several such entries
# tailored to the user's needs.
<alias>= [ (DESCRIPTION_LIST = # Optional depending on whether u have
# one or more descriptions
# If there is just one description, unnecessary ]
(DESCRIPTION=
[ (SDU=2048) ] # Optional, defaults to 2048
# Can take values between 512 and 32K
[ (ADDRESS_LIST= # Optional depending on whether u have
# one or more addresses
# If there is just one address, unnecessary ]
(ADDRESS=
[ (COMMUNITY=<community_name>) ]
(PROTOCOL=tcp)
(HOST=<hostname>)
(PORT=<portnumber (1521 is a standard port used)>)
)
[ (ADDRESS=
(PROTOCOL=ipc)
(KEY=<ipckey (PNPKEY is a standard key used)>)
)
]
[ (ADDRESS=
[ (COMMUNITY=<community_name>) ]
(PROTOCOL=decnet)
(NODE=<nodename>)
(OBJECT=<objectname>)
)
]
... # More addresses
[ ) ] # Optional depending on whether ADDRESS_LIST is used or not
[ (CONNECT_DATA=
(SID=orcl)
[ (GLOBAL_NAME=<global_database_name>) ]
)
]
[ (SOURCE_ROUTE=yes) ]
)
(DESCRIPTION=
[ (SDU=2048) ] # Optional, defaults to 2048
# Can take values between 512 and 32K
[ (ADDRESS_LIST= ] # Optional depending on whether u have more
# than one address or not
# If there is just one address, unnecessary
(ADDRESS
[ (COMMUNITY=<community_name>) ]
(PROTOCOL=tcp)
(HOST=<hostname>)
(PORT=<portnumber (1521 is a standard port used)>)
)
[ (ADDRESS=
(PROTOCOL=ipc)
(KEY=<ipckey (PNPKEY is a standard key used)>)
)
]
... # More addresses
[ ) ] # Optional depending on whether ADDRESS_LIST
# is being used
[ (CONNECT_DATA=
(SID=orcl)
[ (GLOBAL_NAME=<global_database_name>) ]
)
]
[ (SOURCE_ROUTE=yes) ]
)
[ (CONNECT_DATA=
(SID=orcl)
[ (GLOBAL_NAME=<global_database_name>) ]
)
]
... # More descriptions
[ ) ] # Optional depending on whether DESCRIPTION_LIST is used or not
And this is my listener.ora:
# copyright (c) 1997 by the Oracle Corporation
#
# NAME
# listener.ora
# FUNCTION
# Network Listener startup parameter file example
# NOTES
# This file contains all the parameters for listener.ora,
# and could be used to configure the listener by uncommenting
# and changing values. Multiple listeners can be configured
# in one listener.ora, so listener.ora parameters take the form
# of SID_LIST_<lsnr>, where <lsnr> is the name of the listener
# this parameter refers to. All parameters and values are
# case-insensitive.
# <lsnr>
# This parameter specifies both the name of the listener, and
# its listening address(es). Other parameters for this listener
# use this name in place of <lsnr>. When not specified,
# the name for <lsnr> defaults to "LISTENER", with the default
# address value as shown below.
#
# LISTENER =
# (ADDRESS_LIST=
# (ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521))
# (ADDRESS=(PROTOCOL=ipc)(KEY=PNPKEY)))
# SID_LIST_<lsnr>
# List of services the listener knows about and can connect
# clients to. There is no default. See the Net8 Administrator's
# Guide for more information.
#
# SID_LIST_LISTENER=
# (SID_LIST=
# (SID_DESC=
# #BEQUEATH CONFIG
# (GLOBAL_DBNAME=salesdb.mycompany)
# (SID_NAME=sid1)
# (ORACLE_HOME=/private/app/oracle/product/8.0.3)
# #PRESPAWN CONFIG
# (PRESPAWN_MAX=20)
# (PRESPAWN_LIST=
# (PRESPAWN_DESC=(PROTOCOL=tcp)(POOL_SIZE=2)(TIMEOUT=1))
# )
# )
# )
# PASSWORDS_<lsnr>
# Specifies a password to authenticate stopping the listener.
# Both encrypted and plain-text values can be set. Encrypted passwords
# can be set and stored using lsnrctl.
# LSNRCTL> change_password
# Will prompt for old and new passwords, and use encryption both
# to match the old password and to set the new one.
# LSNRCTL> set password
# Will prompt for the new password, for authentication with
# the listener. The password must be set before running the next
# command.
# LSNRCTL> save_config
# Will save the changed password to listener.ora. These last two
# steps are not necessary if SAVE_CONFIG_ON_STOP_<lsnr> is ON.
# See below.
#
# Default: NONE
#
# PASSWORDS_LISTENER = 20A22647832FB454 # "foobar"
# SAVE_CONFIG_ON_STOP_<lsnr>
# Tells the listener to save configuration changes to listener.ora when
# it shuts down. Changed parameter values will be written to the file,
# while preserving formatting and comments.
# Default: OFF
# Values: ON/OFF
#
# SAVE_CONFIG_ON_STOP_LISTENER = ON
# USE_PLUG_AND_PLAY_<lsnr>
# Tells the listener to contact an Onames server and register itself
# and its services with Onames.
# Values: ON/OFF
# Default: OFF
#
# USE_PLUG_AND_PLAY_LISTENER = ON
# LOG_FILE_<lsnr>
# Sets the name of the listener's log file. The .log extension
# is added automatically.
# Default=<lsnr>
#
# LOG_FILE_LISTENER = lsnr
# LOG_DIRECTORY_<lsnr>
# Sets the directory for the listener's log file.
# Default: <oracle_home>/network/log
#
# LOG_DIRECTORY_LISTENER = /private/app/oracle/product/8.0.3/network/log
# TRACE_LEVEL_<lsnr>
# Specifies desired tracing level.
# Default: OFF
# Values: OFF/USER/ADMIN/SUPPORT/0-16
#
# TRACE_LEVEL_LISTENER = SUPPORT
# TRACE_FILE_<lsnr>
# Sets the name of the listener's trace file. The .trc extension
# is added automatically.
# Default: <lsnr>
#
# TRACE_FILE_LISTENER = lsnr
# TRACE_DIRECTORY_<lsnr>
# Sets the directory for the listener's trace file.
# Default: <oracle_home>/network/trace
#
# TRACE_DIRECTORY_LISTENER=/private/app/oracle/product/8.0.3/network/trace
# CONNECT_TIMEOUT_<lsnr>
# Sets the number of seconds that the listener waits to get a
# valid database query after it has been started.
# Default: 10
#
# CONNECT_TIMEOUT_LISTENER=10
Ping the database IP to check that it is listening, then confirm your system hostname; it must be the same as in your tnsnames.ora file.
After that, the connection depends on the host, port, and SID fields, which must be entered correctly.
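Since ORA-12505 means the listener does not recognize the SID in the connect descriptor, a concrete alias is usually more helpful than the syntax template above. A minimal sketch of a tnsnames.ora entry, where the host, port, and service name are placeholders you must replace with your own values:

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )

If the connection still fails, lsnrctl status on the database host lists the services the listener actually knows about, which is the authoritative place to read the SID or service name from.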

I have installed Elasticsearch 2.1.0

After configuring and installing Elasticsearch I got this error while checking the logs.
[2016-01-25 15:37:33,223][WARN ][bootstrap ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2016-01-25 15:37:33,223][WARN ][bootstrap ] This can result in part of the JVM being swapped out.
[2016-01-25 15:37:33,224][WARN ][bootstrap ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-01-25 15:37:33,224][WARN ][bootstrap ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
[2016-01-25 15:37:33,224][WARN ][bootstrap ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2016-01-25 15:37:33,428][INFO ][node ] [node-1] version[2.1.0], pid[13298], build[72cd1f1/2015-11-18T22:40:03Z]
[2016-01-25 15:37:33,428][INFO ][node ] [node-1] initializing ...
[2016-01-25 15:37:33,508][INFO ][plugins ] [node-1] loaded [], sites []
[2016-01-25 15:37:33,528][INFO ][env ] [node-1] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [43.8gb], net total_space [49.9gb], spins? [unknown], types [rootfs]
[2016-01-25 15:37:35,022][INFO ][node ] [node-1] initialized
[2016-01-25 15:37:35,022][INFO ][node ] [node-1] starting ...
[2016-01-25 15:37:35,088][INFO ][transport ] [node-1] publish_address {10.155.153.74:9300}, bound_addresses {10.155.153.74:9300}
[2016-01-25 15:37:35,097][INFO ][discovery ] [node-1] Elasticsearch/M0pCcU6UQ1ShHxlOZ4U22w
[2016-01-25 15:37:38,157][INFO ][cluster.service ] [node-1] new_master {node-1}{M0pCcU6UQ1ShHxlOZ4U22w}{10.155.153.74}{10.155.153.74:9300}{master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-01-25 15:37:38,195][INFO ][http ] [node-1] publish_address {10.155.153.74:9200}, bound_addresses {10.155.153.74:9200}
[2016-01-25 15:37:38,196][INFO ][node ] [node-1] started
[2016-01-25 15:37:38,250][INFO ][gateway ] [node-1] recovered [0] indices into cluster_state
[2016-01-25 15:37:45,458][INFO ][cluster.metadata ] [node-1] [.kibana] creating index, cause [api], templates [], shards [1]/[1], mappings [config
I have also checked that bootstrap.mlockall: true is set.
elasticsearch.yml file
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: Elasticsearch
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
node.master: true
node.data: true
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.mlockall: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.155.153.74
network.publish_host: 10.155.153.74
network.bind_host: 10.155.153.74
#
# Set a custom port for HTTP:
#
http.port: 9200
discovery.zen.ping.multicast.enabled: false
# http.cors.enabled: true
# http.cors.allow-origin: http://tvmatp326579d.ad.infosys.com:5601/
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Elasticsearch nodes will find each other via unicast, by default.
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true
Can anybody tell me what the issue could be? Why is ES not able to lock JVM memory?
Update: set the ES_HEAP_SIZE environment variable.
Ref: Heap sizing
Question 2 (from the comments): make sure that ports 9200 and 9300 are not blocked by the firewall.
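As the startup warning itself suggests, the memlock limit has to be raised for the elasticsearch user before mlockall can succeed. A minimal sketch of the usual fix, assuming the process runs as the elasticsearch user (a re-login or service restart is required afterwards):

# /etc/security/limits.conf
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

On systemd-managed installs the equivalent knob is LimitMEMLOCK=infinity in the service unit, and running ulimit -l as the service user verifies the new limit.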
