I have been trying to install Elasticsearch without success. Every single guide, regardless of whether I used the .deb or installed from apt, fails in the same way. I am using Ubuntu 20.04 as an LXC container on Proxmox. After installing, the service fails to start; here is the log file:
[2022-06-10T12:06:13.884+0000][22320][gc,init] CardTable entry size: 512
[2022-06-10T12:06:13.885+0000][22320][gc ] Using G1
[2022-06-10T12:06:13.957+0000][22320][gc,init] Version: 18.0.1.1+2-6 (release)
[2022-06-10T12:06:13.957+0000][22320][gc,init] CPUs: 24 total, 4 available
[2022-06-10T12:06:13.957+0000][22320][gc,init] Memory: 96593M
[2022-06-10T12:06:13.957+0000][22320][gc,init] Large Page Support: Disabled
[2022-06-10T12:06:13.957+0000][22320][gc,init] NUMA Support: Disabled
[2022-06-10T12:06:13.958+0000][22320][gc,init] Compressed Oops: Enabled (Non-zero disjoint base)
[2022-06-10T12:06:13.958+0000][22320][gc,init] Heap Region Size: 16M
[2022-06-10T12:06:13.958+0000][22320][gc,init] Heap Min Capacity: 31G
[2022-06-10T12:06:13.958+0000][22320][gc,init] Heap Initial Capacity: 31G
[2022-06-10T12:06:13.958+0000][22320][gc,init] Heap Max Capacity: 31G
[2022-06-10T12:06:13.958+0000][22320][gc,init] Pre-touch: Disabled
[2022-06-10T12:06:13.958+0000][22320][gc,init] Parallel Workers: 4
[2022-06-10T12:06:13.958+0000][22320][gc,init] Concurrent Workers: 1
[2022-06-10T12:06:13.958+0000][22320][gc,init] Concurrent Refinement Workers: 4
[2022-06-10T12:06:13.958+0000][22320][gc,init] Periodic GC: Disabled
[2022-06-10T12:06:13.958+0000][22320][gc,metaspace] CDS archive(s) not mapped
[2022-06-10T12:06:13.958+0000][22320][gc,metaspace] Compressed class space mapped at: 0x0000000080000000-0x00000000c0000000, reserved size: 1073741>
[2022-06-10T12:06:13.958+0000][22320][gc,metaspace] Narrow klass base: 0x0000000000000000, Narrow klass shift: 0, Narrow klass range: 0xc0000000
[2022-06-10T12:06:14.152+0000][22320][gc,heap,exit] Heap
[2022-06-10T12:06:14.152+0000][22320][gc,heap,exit] garbage-first heap total 32505856K, used 24578K [0x0000001001000000, 0x00000017c1000000)
[2022-06-10T12:06:14.152+0000][22320][gc,heap,exit] region size 16384K, 2 young (32768K), 0 survivors (0K)
[2022-06-10T12:06:14.152+0000][22320][gc,heap,exit] Metaspace used 3525K, committed 3584K, reserved 1114112K
[2022-06-10T12:06:14.152+0000][22320][gc,heap,exit] class space used 271K, committed 320K, reserved 1048576K
So pretty much all the online guides for installing Elasticsearch were useless for me except this one:
https://techviewleo.com/install-elastic-stack-elk-8-on-ubuntu/
For anyone attempting to install Elasticsearch who ends up here: the main thing the above guide does that the other ones don't is disable all the security settings in the YAML file.
Here are the settings that worked for me:
network.host: localhost
cluster.name: my-application
node.name: node-1
# Enable security features
xpack.security.enabled: false
xpack.security.enrollment.enabled: false
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12
As a warning: this config should not be used for a publicly exposed instance.
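After editing /etc/elasticsearch/elasticsearch.yml, this is roughly how I restarted the service and checked that it was up (standard systemd and curl commands, nothing specific to this setup; with security disabled the curl call needs no credentials):
sudo systemctl restart elasticsearch
sudo systemctl status elasticsearch
# should return the cluster banner JSON on the default HTTP port
curl http://localhost:9200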
Related error:
Job for elasticsearch.service failed because the control process exited with error code.
sudo systemctl status elasticsearch.service:
elasticsearch.service - Elasticsearch
Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2022-10-14 02:30:18 PKT; 4min 51s ago
/var/log/elasticsearch/elasticsearch.log:
org.elasticsearch.ElasticsearchSecurityException: invalid configuration for xpack.security.transport.ssl - [xpack.security.transport.ssl.enabled] is >
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67) ~[elasticsearch-8.4.3.jar:?]
/etc/elasticsearch/elasticsearch.yml:
======================== Elasticsearch Configuration =========================
NOTE: Elasticsearch comes with reasonable defaults for most settings.
Before you set out to tweak and tune the configuration, make sure you
understand what are you trying to accomplish and the consequences.
The primary way of configuring a node is via this file. This template lists
the most important settings you may want to configure for a production cluster.
Please consult the documentation for further information on configuration options:
https://www.elastic.co/guide/en/elasticsearch/reference/index.html
---------------------------------- Cluster ----------------------------------
Use a descriptive name for your cluster:
cluster.name: my-application
------------------------------------ Node ------------------------------------
Use a descriptive name for the node:
node.name: node-1
Add custom attributes to the node:
node.attr.rack: r1
----------------------------------- Paths ------------------------------------
Path to directory where to store the data (separate multiple locations by comma):
path.data: /var/lib/elasticsearch
Path to log files:
path.logs: /var/log/elasticsearch
----------------------------------- Memory -----------------------------------
Lock the memory on startup:
bootstrap.memory_lock: true
Make sure that the heap size is set to about half the memory available
on the system and that the owner of the process is allowed to use this
limit.
Elasticsearch performs poorly when the system is swapping the memory.
---------------------------------- Network -----------------------------------
By default Elasticsearch is only accessible on localhost. Set a different
address here to expose this node on the network:
network.host: localhost
By default Elasticsearch listens for HTTP traffic on the first free port it
finds starting at 9200. Set a specific HTTP port here:
http.port: 9200
For more information, consult the network module documentation.
--------------------------------- Discovery ----------------------------------
Pass an initial list of hosts to perform discovery when this node is started:
The default list of hosts is ["127.0.0.1", "[::1]"]
discovery.seed_hosts: ["host1", "host2"]
Bootstrap the cluster using an initial set of master-eligible nodes:
cluster.initial_master_nodes: ["node-1", "node-2"]
For more information, consult the discovery and cluster formation module documentation.
--------------------------------- Readiness ----------------------------------
Enable an unauthenticated TCP readiness endpoint on localhost
readiness.port: 9399
---------------------------------- Various -----------------------------------
Allow wildcard deletion of indices:
action.destructive_requires_name: false
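The exception above complains that [xpack.security.transport.ssl.enabled] is not set while other xpack.security.transport.ssl.* settings are configured. A minimal sketch of making the flag explicit, assuming node-to-node TLS is not wanted on this single node (otherwise set it to true and keep valid keystore/truststore paths):
# explicitly disable transport TLS so the auto-generated keystore settings no longer trip the check
xpack.security.transport.ssl.enabled: false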
I have two Elasticsearch clusters (Cluster1 and Cluster2) and I am trying to configure a follower index in Cluster2 from a leader index in Cluster1.
I have followed these steps:
Add Cluster1 as remote cluster in Cluster2.
Configuration image
Configure the following users:
In Cluster1 user "cross-cluster-user" with the role "remote-replication".
cross-cluster-user configuration image
In Cluster2 user "cross-cluster-user" with the role "remote-replication".
cross-cluster-user configuration image
When I try to create a follower index for the "newblogs" index, I get the following error:
Can't create follower index no such index [newblogs]
index_not_found_exception: no such index [newblogs]
Error image
The newblogs index exists in Cluster1:
Get index result
My elasticsearch version is 8.3.3.
Any help will be appreciated.
Best regards.
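For reference, the equivalent request through the cross-cluster replication API looks like the sketch below; the remote cluster alias and follower index name are placeholders, since I set this up through the Kibana UI:
PUT /newblogs-follower/_ccr/follow
{
  "remote_cluster": "Cluster1",
  "leader_index": "newblogs"
}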
elasticsearch.yml (Cluster1)
cluster.name: elastic-lab
node.name: ${HOSTNAME}
network.host: _eth1_
cluster.initial_master_nodes: ["node1"]
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 28-08-2022 15:46:47
#
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
elasticsearch.yml (Cluster2)
cluster.name: elastic-lab2
node.name: ${HOSTNAME}
network.host: _eth1_
cluster.initial_master_nodes: ["node1"]
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 28-08-2022 16:07:28
#
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
It was a very silly problem... I have 2 clusters (Cluster1 and Cluster2), and each consists of 1 Elasticsearch node (hostname: node1) and 1 Kibana virtual machine (hostname: node4). "node1" has a different IP address in each cluster, but when I configured node1 (of Cluster1) as a seed node, the name "node1" resolved to the IP address of node1 from Cluster2. This was the reason the remote cluster appeared as connected: it was connected to its own node1!
I have configured the seed node by IP (instead of by hostname) and it seems to work! I also had to change the "verification_mode" option in the elasticsearch.yml of all nodes to "none" (because I was having SSL issues and this is only a lab).
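For anyone doing this outside the Kibana UI, the remote cluster seed can be set by IP through the cluster settings API; a sketch, where the alias and address are placeholders for my lab values:
PUT _cluster/settings
{
  "persistent": {
    "cluster.remote.Cluster1.seeds": ["192.168.56.11:9300"]
  }
}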
Note: I didn't have to configure any user or role for the clusters to connect, even though the documentation says so.
Best regards.
I am moving from Elasticsearch 2.x to 5.x and facing this problem on startup.
[2017-02-28T14:38:24,490][INFO ][o.e.b.BootstrapChecks ] [node1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-02-28T14:38:24,494][ERROR][o.e.b.Bootstrap ] [node1] node validation exception
bootstrap checks failed
max file descriptors [8192] for elasticsearch process is too low, increase to at least [65536]
max size virtual memory [52729364480] for user [elastic] is too low, increase to [unlimited]
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
My yml looks as follows:
node.name: node1
network.host: _global_
I have downloaded the Elasticsearch tarball, and the OS is SLES 11 SP4.
The Elasticsearch bootstrap checks are defined here, but after playing around with /etc/security/limits.conf and /etc/sysctl.conf, I still can't bring it up.
With an RPM installation, these things are supposedly taken care of automatically.
These are the settings that finally worked.
/etc/security/limits.conf
* hard memlock unlimited
* soft memlock unlimited
* hard nofile 65536
* soft nofile 65536
* - as unlimited
/etc/sysctl.conf
fs.file-max = 2097152
vm.max_map_count = 262144
vm.swappiness = 1
elasticsearch.yml
cluster.name: atul-es-kerberos
node.name: node1
network.host: _eth0:ipv4_
discovery.zen.ping.unicast.hosts: ["atul.labs.com"]
discovery.zen.minimum_master_nodes: 1
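To double-check that the new limits actually apply to the session that launches Elasticsearch (limits.conf only takes effect on a new login), something along these lines can be run as the user that starts the process; these are standard commands, adjust as needed:
# reload /etc/sysctl.conf and confirm the map count
sudo sysctl -p
sysctl vm.max_map_count
# per-process limits: locked memory, open files, virtual memory
ulimit -l
ulimit -n
ulimit -v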
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
1. Linux
The vm.max_map_count=262144 setting should be set permanently in /etc/sysctl.conf
grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
Or, to apply it immediately on the running system (not persistent across reboots):
sysctl -w vm.max_map_count=262144
For Docker on Windows (using docker-machine), you should run:
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
I've been reading a lot about this issue here and on other websites, but I haven't managed to find a proper solution for increasing the image size limit, which is set to 10GB by default.
A bit of background information.
I'm building a docker container:
https://bitbucket.org/efestolab/docker-buildgaffer
which downloads and builds a consistent set of libraries on top of a CentOS image (and takes a horrible amount of time and space to build).
The problem is that every single time I try to build it, I hit this error:
No space left on device
Docker version:
Docker version 1.7.1, build 786b29d
Docker info:
Containers: 1
Images: 76
Storage Driver: devicemapper
Pool Name: docker-8:7-12845059-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: extfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 11.28 GB
Data Space Total: 107.4 GB
Data Space Available: 96.1 GB
Metadata Space Used: 10.51 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.137 GB
Udev Sync Supported: false
Deferred Removal Enabled: false
Data loop file: /home/_varlibdockerfiles/devicemapper/devicemapper/data
Metadata loop file: /home/_varlibdockerfiles/devicemapper/devicemapper/metadata
Library Version: 1.02.82-git (2013-10-04)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.15.9-031509-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 8
Total Memory: 15.58 GiB
Name: hdd-XPS-15-9530
ID: 2MEF:IYLS:MCN5:AR5O:6IXJ:3OB3:DGJE:ZC4N:YWFD:7AAB:EQ73:LKXQ
Username: efesto
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
After stopping the service and nuking the /var/lib/docker folder, I updated my docker startup script
/lib/systemd/system/docker.service
with these flags:
ExecStart=/usr/bin/docker -d --storage-opt dm.basesize=20G --storage-opt dm.loopdatasize=256G -H fd:// $DOCKER_OPTS
and restarted the docker service, but it still fails with the same error.
I've also read that it might be due to the original image I rely on (centos:6), which might have been built with a 10GB limit.
So I rebuilt the centos6 image and used that as the base for mine, but I hit the same error.
Does anyone have a reliable way to let me build this docker image fully?
If there's any other information that might be useful, feel free to ask.
Thanks for any replies or suggestions!
L.
Found this article
Basically, edit the /etc/docker/daemon.json file to include:
{
  "storage-opts": [
    "dm.basesize=40G"
  ]
}
Restart the docker service, and you will be able to create/import images larger than 10GB.
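To apply and verify, something like the following should work (assuming the devicemapper driver is still in use; images built before the change may need to be removed and rebuilt for the new base size to take effect, as another answer here points out):
sudo systemctl restart docker
# newer Docker releases report the devicemapper base size here
docker info | grep -i "base device size"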
Thanks to the tests by @user2915097, I updated the kernel to version 3.16.0, installed the kernel extras, and removed and reinstalled docker.
The problem seems attributable to devicemapper; now, without any change to the docker command, I get:
Containers: 0
Images: 94
Storage Driver: aufs
Root Dir: /home/_varlibdockerfiles/aufs
Backing Filesystem: extfs
Dirs: 94
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.16.0-45-generic
Operating System: Ubuntu 14.04.3 LTS
CPUs: 8
Total Memory: 15.58 GiB
Name: hdd-XPS-15-9530
ID: 2MEF:IYLS:MCN5:AR5O:6IXJ:3OB3:DGJE:ZC4N:YWFD:7AAB:EQ73:LKXQ
Username: efesto
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
and it finally builds images > 10GB.
L.
Since this question was asked, the storage driver shown here:
Storage Driver: devicemapper
is no longer the default and is not recommended. That also means the settings for the 10GB limit no longer apply.
The overlay2 storage driver (currently enabled by default) does not have a size limit of its own. Instead, it uses whatever free space and inodes are available on the underlying filesystem that holds /var/lib/docker. You can check that free space with:
df -h /var/lib/docker
df -ih /var/lib/docker
After modifying the docker daemon startup parameters, do the following:
systemctl daemon-reload
systemctl stop docker
rm -rf /var/lib/docker/*
systemctl start docker
This will remove all your images, so make sure you save them first, e.g.
docker save -o something.tar.gz image_name
and reload them after starting docker, e.g.
docker load -i something.tar.gz
I just created a 3-node Cassandra cluster on my local machines using Vagrant, running Cassandra 2.0.13.
Following is my cassandra.yaml config for each node.
node0
cluster_name: 'MyCassandraCluster'
num_tokens: 256
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.33.10,192.168.33.11"
listen_address: 192.168.33.10
rpc_address: 0.0.0.0
endpoint_snitch: RackInferringSnitch
node1
cluster_name: 'MyCassandraCluster'
num_tokens: 256
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.33.10,192.168.33.11"
listen_address: 192.168.33.11
rpc_address: 0.0.0.0
endpoint_snitch: RackInferringSnitch
node2
cluster_name: 'MyCassandraCluster'
num_tokens: 256
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.33.10,192.168.33.11"
listen_address: 192.168.33.12
rpc_address: 0.0.0.0
endpoint_snitch: RackInferringSnitch
When I run
nodetool status
I get the following result:
Datacenter: 168
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 192.168.33.12 88.34 KB 256 67.8% b3d6d9f2-3856-445b-bad8-97763d7b22c7 33
UN 192.168.33.11 73.9 KB 256 66.4% 67e6984b-d822-47af-b26c-f00aa39f02d0 33
UN 192.168.33.10 55.78 KB 256 65.8% 4b599ae0-dd02-4c69-85a3-05782a70569e 33
According to the tutorial I attended from DataStax, each node should own about 33% of the data, but here it shows each node owning around 65%, and I am not able to figure out what I am doing wrong.
I have not loaded any data into the cluster nor created any keyspace; it's a brand new cluster without any data.
Please help me figure out the problem.
Thanks.
If there is no data loaded into the cluster, there shouldn't be any percentage owned. Also, your nodetool output IP addresses do not match what you put earlier for your IPs; maybe you are looking at different machines that already have data loaded? Last, you may not want to use a RackInferringSnitch, since it seems that all your nodes are in the same rack. If you are just playing around in a single datacenter, you can use the simple snitch. Otherwise, NetworkTopology is good for multiple datacenters.
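If it really is a single-datacenter lab, the corresponding change would be something like the line below in cassandra.yaml on every node, followed by a restart (a sketch of the suggestion above, not a drop-in config):
# use the simplest snitch for a one-DC, one-rack test cluster
endpoint_snitch: SimpleSnitch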
For the Owns / Load column to be accurate in nodetool status, you need to specify a keyspace.
Try nodetool status <keyspace name> and it will show you the percentages for how much data is stored on each node.
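As a quick sanity check, you can create a throwaway keyspace and re-run the command; the keyspace name and replication factor below are made up for illustration. With replication_factor 3 on a 3-node cluster each node should report owning 100%, and with 1 roughly 33%:
cqlsh 192.168.33.10
CREATE KEYSPACE demo WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
exit
nodetool status demo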