I have installed Elasticsearch 7.14 with the help of this article.
My elasticsearch.yml file is below; I believe the problem is in the configuration. I want to use Elasticsearch with Magento 2.4.3, and the recommended version for Magento 2.4.3 is Elasticsearch 7.10.
My server is a Linode running CentOS 7 with a 4-core CPU and 8 GB of RAM. I have also set Xms and Xmx to 1024m in jvm.options.
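For reference, the heap lines in jvm.options now look like this (on an RPM install the file lives under /etc/elasticsearch):
# heap size, as mentioned above
-Xms1024m
-Xmx1024m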
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
# cluster.name: myCluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
# node.name: My First Node
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#node.data : true
#node.roles: [ master ]
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
bootstrap.system_call_filter: false
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: localhost
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
discover.seed_hosts: []
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#cluster.initial_master_nodes : []
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
But I'm unable to start the service; systemctl status elasticsearch.service reports:
elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2022-06-02 21:58:16 IST; 6s ago
Docs: https://www.elastic.co
Process: 20876 ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 20876 (code=exited, status=1/FAILURE)
Further debugging with journalctl -xe shows:
-- Unit user-1001.slice has finished shutting down.
Jun 02 21:59:01 li2147-225.members.linode.com dovecot[13248]: lmtp(21106): Connect from local
Jun 02 21:59:01 li2147-225.members.linode.com dovecot[13248]: lmtp(21106): Disconnect from local: Logged out (state=GREETING)
Jun 02 21:59:01 li2147-225.members.linode.com dovecot[13248]: imap-login: Login: user=<__cpanel__service__auth__imap__q6pfahcr7hjsl
Jun 02 21:59:01 li2147-225.members.linode.com dovecot[13248]: imap(__cpanel__service__auth__imap__q6pfahcr7hjslimd)<21120><uNTkgHng
Jun 02 21:59:01 li2147-225.members.linode.com pure-ftpd[21128]: (?#127.0.0.1) [INFO] New connection from 127.0.0.1
Jun 02 21:59:01 li2147-225.members.linode.com pure-ftpd[21128]: (?#127.0.0.1) [INFO] __cpanel__service__auth__ftpd__tUDT5E9wzfQWUKh
Jun 02 21:59:01 li2147-225.members.linode.com pure-ftpd[21128]: (__cpanel__service__auth__ftpd__tUDT5E9wzfQWUKhS#127.0.0.1) [INFO]
Jun 02 21:59:11 li2147-225.members.linode.com sshd[21168]: Invalid user mama from 165.232.141.0 port 60618
Jun 02 21:59:11 li2147-225.members.linode.com sshd[21168]: input_userauth_request: invalid user mama [preauth]
Jun 02 21:59:12 li2147-225.members.linode.com sshd[21168]: Received disconnect from 165.232.141.0 port 60618:11: Bye Bye [preauth]
Jun 02 21:59:12 li2147-225.members.linode.com sshd[21168]: Disconnected from 165.232.141.0 port 60618 [preauth]
The last few lines of the Elasticsearch log look like this:
java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
at org.elasticsearch.systemd.Libsystemd.lambda$static$0(Libsystemd.java:23) ~[?:?]
at java.security.AccessController.doPrivileged(AccessController.java:312) ~[?:?]
at org.elasticsearch.systemd.Libsystemd.<clinit>(Libsystemd.java:22) ~[?:?]
at org.elasticsearch.systemd.SystemdPlugin.sd_notify(SystemdPlugin.java:115) ~[?:?]
at org.elasticsearch.systemd.SystemdPlugin.onNodeStarted(SystemdPlugin.java:126) ~[?:?]
at java.util.ArrayList.forEach(ArrayList.java:1511) ~[?:?]
at org.elasticsearch.node.Node.start(Node.java:971) ~[elasticsearch-7.14.1.jar:7.14.1]
at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:313) ~[elasticsearch-7.14.1.jar:7.14.1]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:408) ~[elasticsearch-7.14.1.jar:7.14.1]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.14.1.jar:7.14.1]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-7.14.1.jar:7.14.1]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:75) ~[elasticsearch-7.14.1.jar:7.14.1]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:116) ~[elasticsearch-cli-7.14.1.jar:7.14.1]
at org.elasticsearch.cli.Command.main(Command.java:79) ~[elasticsearch-cli-7.14.1.jar:7.14.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) ~[elasticsearch-7.14.1.jar:7.14.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:81) ~[elasticsearch-7.14.1.jar:7.14.1]
Please give some advice on how to fix this.
Thanks.
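One commonly reported cause of this JNA NoClassDefFoundError (an assumption here; the logs above don't confirm it) is a /tmp mounted with noexec, which stops JNA from unpacking its native library. A sketch of the usual workaround on an RPM install:
# create a temp directory Elasticsearch is allowed to execute from (paths are illustrative)
sudo mkdir -p /usr/share/elasticsearch/tmp
sudo chown elasticsearch:elasticsearch /usr/share/elasticsearch/tmp
# then point the service at it in /etc/sysconfig/elasticsearch:
ES_TMPDIR=/usr/share/elasticsearch/tmp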
Related
I am trying to start Elasticsearch after installation. It throws this error:
Job for elasticsearch.service failed because a fatal signal was delivered to the control process.
See "systemctl status elasticsearch.service" and "journalctl -xe" for details.
After running systemctl status elasticsearch.service:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/lib/systemd/system/elasticsearch.service; disabled; vendor preset: enabled)
Active: failed (Result: signal) since Mon 2021-05-17 14:30:02 IST; 1min 56s ago
Docs: https://www.elastic.co
Process: 94558 ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet (code=killed, signal=KILL)
Main PID: 94558 (code=killed, signal=KILL)
May 17 14:29:58 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: Starting Elasticsearch...
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: elasticsearch.service: Failed with result 'signal'.
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: Failed to start Elasticsearch.
In journalctl -xe, I am getting this
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 kernel: Out of memory: Killed process 94558 (java) total-vm:9804148kB, anon-rss:5809744kB, file-rss:0kB, shmem-rss:0kB, UID:129 pgtables:11660kB oom_sc>
May 17 14:30:01 rohit-Lenovo-Legion-Y540-15IRH-PG0 CRON[94743]: pam_unix(cron:session): session opened for user root by (uid=0)
May 17 14:30:01 rohit-Lenovo-Legion-Y540-15IRH-PG0 CRON[94744]: (root) CMD ([ -x /etc/init.d/anacron ] && if [ ! -d /run/systemd/system ]; then /usr/sbin/invoke-rc.d anacron start >/dev/null; fi)
May 17 14:30:01 rohit-Lenovo-Legion-Y540-15IRH-PG0 CRON[94743]: pam_unix(cron:session): session closed for user root
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 kernel: oom_reaper: reaped process 94558 (java), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- An ExecStart= process belonging to unit elasticsearch.service has exited.
--
-- The process' exit code is 'killed' and its exit status is 9.
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: elasticsearch.service: Failed with result 'signal'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit elasticsearch.service has entered the 'failed' state with result 'signal'.
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: Failed to start Elasticsearch.
-- Subject: A start job for unit elasticsearch.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit elasticsearch.service has finished with a failure.
--
-- The job identifier is 13124 and the job result is failed.
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 sudo[94552]: pam_unix(sudo:session): session closed for user root
May 17 14:31:34 rohit-Lenovo-Legion-Y540-15IRH-PG0 kernel: [UFW BLOCK] IN=wlp0s20f3 OUT= MAC=90:78:41:e1:0c:67:ec:0d:e4:f9:4a:71:08:00 SRC=192.168.1.102 DST=192.168.1.108 LEN=390 TOS=0x00 PREC=0x00 TTL=>
May 17 14:31:35 rohit-Lenovo-Legion-Y540-15IRH-PG0 kernel: [UFW BLOCK] IN=wlp0s20f3 OUT= MAC=90:78:41:e1:0c:67:ec:0d:e4:f9:4a:71:08:00 SRC=192.168.1.102 DST=192.168.1.108 LEN=390 TOS=0x00 PREC=0x00 TTL=>
May 17 14:31:35 rohit-Lenovo-Legion-Y540-15IRH-PG0 kernel: [UFW BLOCK] IN=wlp0s20f3 OUT= MAC=90:78:41:e1:0c:67:ec:0d:e4:f9:4a:71:08:00 SRC=192.168.1.102 DST=192.168.1.108 LEN=390 TOS=0x00 PREC=0x00 TTL=>
May 17 14:31:37 rohit-Lenovo-Legion-Y540-15IRH-PG0 kernel: [UFW BLOCK] IN=wlp0s20f3 OUT= MAC=90:78:41:e1:0c:67:ec:0d:e4:f9:4a:71:08:00 SRC=192.168.1.102 DST=192.168.1.108 LEN=390 TOS=0x00 PREC=0x00 TTL=>
May 17 14:31:59 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: Started Run anacron jobs.
-- Subject: A start job for unit anacron.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit anacron.service has finished successfully.
--
-- The job identifier is 13197.
May 17 14:31:59 rohit-Lenovo-Legion-Y540-15IRH-PG0 anacron[94906]: Anacron 2.3 started on 2021-05-17
May 17 14:31:59 rohit-Lenovo-Legion-Y540-15IRH-PG0 anacron[94906]: Normal exit (0 jobs run)
May 17 14:31:59 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: anacron.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit anacron.service has successfully entered the 'dead' state.
My ES Configuration
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
Please help me to resolve this issue.
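Given the Out of memory kill in the journal above, one likely fix (a sketch, assuming a 7.x package install) is to cap the JVM heap well below the machine's RAM, as the comments in the config above also advise:
# /etc/elasticsearch/jvm.options.d/heap.options (file name is illustrative)
-Xms2g
-Xmx2g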
I want to set a password for my Elasticsearch. I have not paid or started a free trial, so I assume I am on the default Basic license.
I followed the official guide to install Elasticsearch on an Ubuntu EC2 instance.
I don't think I installed the OSS version, but when I run:
/usr/share/elasticsearch$ sudo bin/elasticsearch-plugin list --verbose
Plugins directory: /usr/share/elasticsearch/plugins
it does not list xpack.
I tried removing and reinstalling Elasticsearch cleanly, in case I had set something wrong.
The only change I made to my elasticsearch.yml is adding xpack.security.enabled:true
However, starting Elasticsearch with systemctl start elasticsearch.service produces this error:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2019-11-13 02:53:44 UTC; 9min ago
Docs: http://www.elastic.co
Process: 20330 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 20330 (code=exited, status=1/FAILURE)
Nov 13 02:53:44 ip-172-31-47-37 elasticsearch[20330]: at org.yaml.snakeyaml.scanner.ScannerImpl.needMoreTokens(ScannerImpl.java:280)
Nov 13 02:53:44 ip-172-31-47-37 elasticsearch[20330]: at org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:225)
Nov 13 02:53:44 ip-172-31-47-37 elasticsearch[20330]: at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:557)
Nov 13 02:53:44 ip-172-31-47-37 elasticsearch[20330]: at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:157)
Nov 13 02:53:44 ip-172-31-47-37 elasticsearch[20330]: at org.yaml.snakeyaml.parser.ParserImpl.getEvent(ParserImpl.java:167)
Nov 13 02:53:44 ip-172-31-47-37 elasticsearch[20330]: at com.fasterxml.jackson.dataformat.yaml.YAMLParser.nextToken(YAMLParser.java:340)
Nov 13 02:53:44 ip-172-31-47-37 elasticsearch[20330]: ... 13 more
Nov 13 02:53:44 ip-172-31-47-37 systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Nov 13 02:53:44 ip-172-31-47-37 systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Nov 13 02:53:44 ip-172-31-47-37 systemd[1]: Failed to start Elasticsearch
Also, after I added xpack.security.enabled:true, listing plugins shows this error:
/usr/share/elasticsearch$ sudo bin/elasticsearch-plugin list --verbose
Exception in thread "main" SettingsException[Failed to load settings from [elasticsearch.yml]]; nested: MarkedYAMLException[while scanning a simple key
in 'reader', line 90, column 1:
xpack.security.enabled:true
^
could not find expected ':'
in 'reader', line 91, column 1:
^
at [Source: sun.nio.ch.ChannelInputStream#6155d082; line: 37, column: 34]]; nested: ScannerException[while scanning a simple key
in 'reader', line 90, column 1:
xpack.security.enabled:true
^
could not find expected ':'
in 'reader', line 91, column 1:
^
];
at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:1097)
at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:1070)
at org.elasticsearch.node.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:83)
at org.elasticsearch.cli.EnvironmentAwareCommand.createEnv(EnvironmentAwareCommand.java:95)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125)
at org.elasticsearch.cli.MultiCommand.execute(MultiCommand.java:77)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125)
at org.elasticsearch.cli.Command.main(Command.java:90)
at org.elasticsearch.plugins.PluginCli.main(PluginCli.java:47)
Caused by: com.fasterxml.jackson.dataformat.yaml.snakeyaml.error.MarkedYAMLException: while scanning a simple key
in 'reader', line 90, column 1:
xpack.security.enabled:true
^
could not find expected ':'
in 'reader', line 91, column 1:
^
at [Source: sun.nio.ch.ChannelInputStream#6155d082; line: 37, column: 34]
at com.fasterxml.jackson.dataformat.yaml.snakeyaml.error.MarkedYAMLException.from(MarkedYAMLException.java:27)
at com.fasterxml.jackson.dataformat.yaml.YAMLParser.nextToken(YAMLParser.java:343)
at org.elasticsearch.common.xcontent.json.JsonXContentParser.nextToken(JsonXContentParser.java:52)
at org.elasticsearch.common.settings.Settings.fromXContent(Settings.java:645)
at org.elasticsearch.common.settings.Settings.fromXContent(Settings.java:620)
at org.elasticsearch.common.settings.Settings.access$400(Settings.java:82)
at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:1093)
... 9 more
Caused by: while scanning a simple key
in 'reader', line 90, column 1:
xpack.security.enabled:true
^
could not find expected ':'
in 'reader', line 91, column 1:
^
at org.yaml.snakeyaml.scanner.ScannerImpl.stalePossibleSimpleKeys(ScannerImpl.java:465)
at org.yaml.snakeyaml.scanner.ScannerImpl.needMoreTokens(ScannerImpl.java:280)
at org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:225)
at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:557)
at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:157)
at org.yaml.snakeyaml.parser.ParserImpl.getEvent(ParserImpl.java:167)
at com.fasterxml.jackson.dataformat.yaml.YAMLParser.nextToken(YAMLParser.java:340)
... 14 more
Here's my elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
xpack.security.enabled:true
What do I need to do to successfully launch Elasticsearch?
Thank you in advance.
Try adding a space after the colon in your xpack setting:
xpack.security.enabled: true
YAML syntax can be pretty strict.
https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
"A dictionary is represented in a simple key: value form (the colon must be followed by a space)"
I am trying to start Elasticsearch with a private IP address, but it does not start; it shows some errors in the log, which I have shared below.
elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#Also Tried with Private IP Address network.host: 52.50.122.93
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["52.50.122.93", "127.0.0.1", "[::1]"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
elasticsearch.log
[2019-05-21T17:22:28,068][ERROR][o.e.b.Bootstrap ] [WIN-CQKBIA6F350] Exception
java.lang.IllegalStateException: failed to obtain node locks, tried [[C:\ELKStack\elasticsearch-7.1.0\data]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:297) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.node.Node.<init>(Node.java:272) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.node.Node.<init>(Node.java:252) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:211) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:211) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:325) [elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) [elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) [elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) [elasticsearch-cli-7.1.0.jar:7.1.0]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) [elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-7.1.0.jar:7.1.0]
[2019-05-21T17:22:28,085][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [WIN-CQKBIA6F350] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to obtain node locks, tried [[C:\ELKStack\elasticsearch-7.1.0\data]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-7.1.0.jar:7.1.0]
at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.1.0.jar:7.1.0]
Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[C:\ELKStack\elasticsearch-7.1.0\data]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:297) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.node.Node.<init>(Node.java:272) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.node.Node.<init>(Node.java:252) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:211) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:211) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:325) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.1.0.jar:7.1.0]
... 6 more
You need to set one of these values:
[1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
For example:
discovery.seed_hosts:
  - 192.168.1.10:9300
  - 192.168.1.11
  - seeds.mydomain.com
The error is clear in the log file
the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
You have to set the cluster.initial_master_nodes or discovery.seed_hosts setting.
Also, don't forget to set node.name and cluster.name. You can also start ES and set the master nodes from the command line:
bin/elasticsearch -Ecluster.initial_master_nodes=master-a,master-b,master-c
https://www.elastic.co/guide/en/elasticsearch/reference/master/important-settings.html
https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-discovery-bootstrap-cluster.html
https://www.elastic.co/guide/en/elasticsearch/reference/master/discovery-settings.html
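For a standalone node, a minimal sketch of those settings might look like this (the names are placeholders):
cluster.name: my-cluster
node.name: node-1
network.host: 0.0.0.0
discovery.seed_hosts: ["127.0.0.1"]
cluster.initial_master_nodes: ["node-1"]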
Here's my config
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /mulelogs/elasticsearch
path.logs: /mulelogs/elasticsearch
When I restart Elasticsearch, this is what I get:
elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2016-01-25 06:33:40 UTC; 9s ago
Docs: http://www.elastic.co
Process: 22213 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -Des.pidfile=${PID_DIR}/elasticsearch.pid -Des.default.path.home=${ES_HOME} -Des.default.path.logs=${LOG_DIR} -Des.default.path.data=${DATA_DIR} -Des.default.path.conf=${CONF_DIR} (code=exited, status=1/FAILURE)
Process: 22212 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 22213 (code=exited, status=1/FAILURE)
elasticsearch[22213]: at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:1074)
elasticsearch[22213]: at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:1061)
elasticsearch[22213]: at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:88)
elasticsearch[22213]: at org.elasticsearch.bootstrap.Bootstrap.initialSettings(Bootstrap.java:217)
elasticsearch[22213]: at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:256)
elasticsearch[22213]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
elasticsearch[22213]: Refer to the log for complete error details.
systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
systemd[1]: Unit elasticsearch.service entered failed state.
systemd[1]: elasticsearch.service failed.
The path is on an attached volume, accessible via /mulelogs/:
drwxrwxrwx. 4 root root 4096 Jan 25 05:11 .
dr-xr-xr-x. 18 root root 4096 Jan 25 06:24 ..
drwxrwxrwx. 4 elasticsearch elasticsearch 4096 Jan 25 05:21 elasticsearch
drwxrwxrwx. 2 root root 16384 Jan 20 01:20 lost+found
I tried chown and chmod to see if permissions were the problem, but it still didn't work.
How do I fix this?
Thanks in advance.
Notes:
OS: CentOS 7
Elasticsearch: 2.1
I have installed ELK following this steps:
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7
Try changing the paths
path.data: /mulelogs/elasticsearch
path.logs: /mulelogs/elasticsearch
to absolute paths.
I had a fresh install and got the same error.
Check whether you have a folder in your path.data directory with the name of your cluster. If so, try deleting it (if possible and you won't lose data).
After deleting it and restarting the service, everything worked (a new folder called nodes was created).
Change the mode to 777 for the new lib and log directories and files.
Check the log file. If it shows an error message like:
java.lang.IllegalStateException: detected index data in
default.path.data [/var/lib/elasticsearch] where there should not be
any; check the logs for details
then you have to delete the nodes directory in the old lib folder. (Back up first; the index data will be gone.)
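A cautious version of that (backing up rather than deleting outright; the path matches the error above):
sudo systemctl stop elasticsearch
sudo mv /var/lib/elasticsearch/nodes /var/lib/elasticsearch/nodes.bak
sudo systemctl start elasticsearch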
I'm following http://jayatiatblogs.blogspot.com/2011/11/storm-installation.html & http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_zkMulitServerSetup to set up an Apache Storm cluster on Ubuntu 14.04 LTS on AWS EC2.
My master node is 10.0.0.185.
My slave nodes are 10.0.0.79, 10.0.0.124 & 10.0.0.84, with myid of 1, 2 and 3 in their zookeeper-data respectively. I set up an Apache ZooKeeper ensemble consisting of all 3 slave nodes.
Below is the zoo.cfg for my slave nodes:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/ubuntu/zookeeper-data
clientPort=2181
server.1=10.0.0.79:2888:3888
server.2=10.0.0.124:2888:3888
server.3=10.0.0.84:2888:3888
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
Below is the storm.yaml for my slave nodes:
########### These MUST be filled in for a storm configuration
storm.zookeeper.server:
- "10.0.0.79"
- "10.0.0.124"
- "10.0.0.84"
# - "localhost"
storm.zookeeper.port: 2181
# nimbus.host: "localhost"
nimbus.host: "10.0.0.185"
storm.local.dir: "/home/ubuntu/storm/data"
java.library.path: "/usr/lib/jvm/java-7-oracle"
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
- 6704
#
# worker.childopts: "-Xmx768m"
# nimbus.childopts: "-Xmx512m"
# supervisor.childopts: "-Xmx256m"
#
# ##### These may optionally be filled in:
#
## List of custom serializations
# topology.kryo.register:
# - org.mycompany.MyType
# - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
# - org.mycompany.MyDecorator
#
## Locations of the drpc servers
# drpc.servers:
# - "server1"
# - "server2"
## Metrics Consumers
# topology.metrics.consumer.register:
# - class: "backtype.storm.metric.LoggingMetricsConsumer"
# parallelism.hint: 1
# - class: "org.mycompany.MyMetricsConsumer"
# parallelism.hint: 1
# argument:
# - endpoint: "metrics-collector.mycompany.org"
Below is the storm.yaml for my master node:
########### These MUST be filled in for a storm configuration
storm.zookeeper.servers:
- "10.0.0.79"
- "10.0.0.124"
- "10.0.0.84"
# - "localhost"
#
storm.zookeeper.port: 2181
nimbus.host: "10.0.0.185"
# nimbus.thrift.port: 6627
# nimbus.task.launch.secs: 240
# supervisor.worker.start.timeout.secs: 240
# supervisor.worker.timeout.secs: 240
ui.port: 8772
# nimbus.childopts: "-Xmx1024m -Djava.net.preferIPv4Stack=true"
# ui.childopts: "-Xmx768m -Djava.net.preferIPv4Stack=true"
# supervisor.childopts: "-Djava.net.preferIPv4Stack=true"
# worker.childopts: "-Xmx768m -Djava.net.preferIPv4Stack=true"
storm.local.dir: "/home/ubuntu/storm/data"
java.library.path: "/usr/lib/jvm/java-7-oracle"
# supervisor.slots.ports:
# - 6700
# - 6701
# - 6702
# - 6703
# - 6704
# worker.childopts: "-Xmx768m"
# nimbus.childopts: "-Xmx512m"
# supervisor.childopts: "-Xmx256m"
# ##### These may optionally be filled in:
#
## List of custom serializations
# topology.kryo.register:
# - org.mycompany.MyType
# - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
# - org.mycompany.MyDecorator
#
## Locations of the drpc servers
# drpc.servers:
# - "server1"
# - "server2"
## Metrics Consumers
# topology.metrics.consumer.register:
# - class: "backtype.storm.metric.LoggingMetricsConsumer"
# parallelism.hint: 1
# - class: "org.mycompany.MyMetricsConsumer"
# parallelism.hint: 1
# argument:
# - endpoint: "metrics-collector.mycompany.org"
I start ZooKeeper on all my slave nodes, then start Storm Nimbus on my master node, then start the Storm supervisor on all my slave nodes. However, when I view the Storm UI, there is only 1 supervisor with a total of 5 slots in the cluster summary, and only 1 supervisor entry in the supervisor summary. Why?
How many slave nodes are actually working if I submit a topology in this case?
Why is it not 3 supervisors with a total of 15 slots?
What should I do in order to get 3 supervisors?
When I check supervisor.log on the slave nodes, the cause is as below:
2015-05-29T09:21:24.185+0000 b.s.d.supervisor [INFO] 5019754f-cae1-4000-beb4-fa016bd1a43d still hasn't started
What you are doing is correct, and it works. The only thing you should change is storm.local.dir: it is the same on your slave and master nodes. Give the nimbus and supervisor nodes different paths (don't use the same local path). When they use the same local path, the nimbus and supervisors share the same ID; they come up, but you don't see all the slots, only one supervisor's workers.
Change storm.local.dir: /home/ubuntu/storm/data so that the supervisors and nimbus don't use the same path.
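For example (paths are illustrative):
# in the nimbus node's storm.yaml
storm.local.dir: "/home/ubuntu/storm/nimbus-data"
# in each supervisor node's storm.yaml
storm.local.dir: "/home/ubuntu/storm/supervisor-data"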
Are you referring to Nimbus as the master node?
Generally, the ZooKeeper cluster should be started first, then Nimbus, then the supervisors. ZooKeeper and Nimbus must always be available for the Storm cluster to function correctly.
You should check the supervisor logs for failures. The Nimbus host and the ZooKeeper machines must be accessible from the supervisor machines.
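A quick connectivity check from each supervisor node (ports taken from the configs above; assuming netcat is installed):
nc -vz 10.0.0.185 6627   # Nimbus thrift port
nc -vz 10.0.0.79 2181    # one of the ZooKeeper ensemble members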