I'm working with Presto on Spark and have Elasticsearch as the data source. I'm not able to run queries through Presto.
My elasticsearch.properties:
elasticsearch.ignore-publish-address=true
elasticsearch.default-schema-name=default
elasticsearch.host=localhost
connector.name=elasticsearch
elasticsearch.port=2900
My docker-compose.yaml:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
container_name: elasticsearch
environment:
- xpack.security.enabled=false
- discovery.type=single-node
- network.host=0.0.0.0
ports:
- '9200:9200'
networks:
- pqp-net
networks:
pqp-net:
driver: bridge
I'm getting the error below:
c.f.p.e.client.ElasticsearchClient - Error refreshing nodes
com.facebook.presto.spi.PrestoException: Connection refused
However, I am able to fetch the details of Elasticsearch at http://localhost:9200:
{ "name" : "ab751e0dd0ad", "cluster_name" : "docker-cluster", "cluster_uuid" : "3T66bOexSGOo6Pwtt2Ul4Q", "version" : {
"number" : "7.6.1",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "aa751e09be0a5072e8570670309b1f12348f023b",
"build_date" : "2020-02-29T00:15:25.529771Z",
"build_snapshot" : false,
"lucene_version" : "8.4.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1" }, "tagline" : "You Know, for Search" }
If anyone has faced the same issue, please help.
Thanks in advance.
Resolved: the problem was the port number in my catalog configuration. elasticsearch.port was set to 2900 while the container exposes 9200; after changing the port I was able to connect to Elasticsearch.
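For reference, the corrected catalog file (a sketch, assuming the compose file above, which maps port 9200) would look like:
connector.name=elasticsearch
elasticsearch.host=localhost
elasticsearch.port=9200
elasticsearch.default-schema-name=default
elasticsearch.ignore-publish-address=true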
Related
I'm trying to set up the Elasticsearch exporter for a local stack:
https://github.com/prometheus-community/elasticsearch_exporter
I get a connection refused when running it with Docker, even though xpack security is disabled. These Elasticsearch settings are enabled:
xpack.license.self_generated.type: basic
xpack.security.enabled: false
xpack.monitoring.collection.enabled: true
xpack.monitoring.exporters.my_local_exporter:
type: local
bootstrap.memory_lock: true
search.allow_expensive_queries: true
indices.memory.index_buffer_size: 30%
I use Elasticsearch 8.1.2:
{
"name" : "0a166124ca20",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "xxxxxxxxxxxxxxx",
"version" : {
"number" : "8.1.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "xxxxxxxxxxxxxxx",
"build_date" : "2022-03-29T21:18:59.991429448Z",
"build_snapshot" : false,
"lucene_version" : "9.0.0",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
I get a limited number of metrics but am missing the majority, for example:
elasticsearch_node_stats_up 0
Here is my docker-compose:
elasticsearch_exporter:
image: quay.io/prometheuscommunity/elasticsearch-exporter:latest
command:
- '--es.uri=http://localhost:9200'
- '--es.ssl-skip-verify'
- '--es.all'
restart: always
environment:
- 'ES_API_KEY=Apikey xxxxxxxxxx'
ports:
- "9114:9114"
I am following reindex documentation here: https://www.elastic.co/guide/en/cloud/current/ec-migrate-data.html
I have two Elasticsearch clusters on localhost:
192.168.0.100:9200
{
"name" : "fses01",
"cluster_name" : "fs-es-cluster",
"cluster_uuid" : "DqVDBBafRaO9UAJPKuc_xQ",
"version" : {
"number" : "7.14.0",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "dd5a0a2acaa2045ff9624f3729fc8a6f40835aa1",
"build_date" : "2021-07-29T20:49:32.864135063Z",
"build_snapshot" : false,
"lucene_version" : "8.9.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
With the following index:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open formshare_records NpEFzsGvTEmivnYjJhwHJg 5 1 51343 0 7.8mb 3.9mb
192.168.0.100:9201
{
"name" : "es01",
"cluster_name" : "es-docker-cluster",
"cluster_uuid" : "bvsRXLwZRtKuWIrIuKx0hg",
"version" : {
"number" : "7.14.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "6bc13727ce758c0e943c3c21653b3da82f627f75",
"build_date" : "2021-09-15T10:18:09.722761972Z",
"build_snapshot" : false,
"lucene_version" : "8.9.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
With the same index, but empty:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open formshare_records uxvizIAIS82YBHSsvpq4-Q 5 1 0 0 2kb 1kb
I am doing a reindex using cURL with the following:
curl -X POST "192.168.0.100:9201/_reindex" -H 'Content-Type: application/json' -d'
{
"source": {
"remote": {
"host": "http://192.168.0.100:9200"
},
"index": "formshare_records",
"query": {
"match_all": {}
}
},
"dest": {
"index": "formshare_records"
}
}
'
But I get zero transfer:
{"took":34,"timed_out":false,"total":0,"updated":0,"created":0,"deleted":0,"batches":0,"version_conflicts":0,"noops":0,"retries":{"bulk":0,"search":0},"throttled_millis":0,"requests_per_second":-1.0,"throttled_until_millis":0,"failures":[]}
The index definition is the same on both sides:
{"settings": {
"index": {
"number_of_shards": 5,
"number_of_replicas": 1
}
},
"mappings": {
"properties": {
"project_id": {"type": "keyword"},
"form_id": {"type": "keyword"},
"schema": {"type": "keyword"},
"table": {"type": "keyword"}
}
}}
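For reference, a definition like this would be applied with a request along these lines (a sketch; index_definition.json is a hypothetical file holding the JSON above):
curl -X PUT "192.168.0.100:9201/formshare_records" -H 'Content-Type: application/json' -d @index_definition.json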
Both of them run under Docker:
192.168.0.100:9200
version: '3'
services:
fses01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
container_name: fses01
environment:
- node.name=fses01
- cluster.name=fs-es-cluster
- discovery.seed_hosts=fses02
- cluster.initial_master_nodes=fses01,fses02
- bootstrap.memory_lock=true
- xpack.security.enabled=false
- "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
- cluster.max_shards_per_node=20000
- script.max_compilations_rate=10000/1m
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /opt/elasticsearch-docker/data:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- esnet
fses02:
image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
container_name: fses02
environment:
- node.name=fses02
- cluster.name=fs-es-cluster
- discovery.seed_hosts=fses01
- cluster.initial_master_nodes=fses01,fses02
- bootstrap.memory_lock=true
- xpack.security.enabled=false
- "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
- cluster.max_shards_per_node=20000
- script.max_compilations_rate=10000/1m
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /opt/elasticsearch-docker/data2:/usr/share/elasticsearch/data
networks:
- esnet
kib01:
image: docker.elastic.co/kibana/kibana:7.14.0
container_name: kib01
ports:
- 5601:5601
environment:
ELASTICSEARCH_URL: http://fses01:9200
ELASTICSEARCH_HOSTS: '["http://fses01:9200","http://fses02:9200"]'
networks:
- esnet
networks:
esnet:
192.168.0.100:9201
version: '3.7'
services:
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.14.2
container_name: es01
environment:
- node.name=es01
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es02,es03
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- reindex.remote.whitelist=192.168.0.100:9200
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /opt/elasticsearch710/data:/usr/share/elasticsearch/data
ports:
- 9201:9200
networks:
- elastic
es02:
image: docker.elastic.co/elasticsearch/elasticsearch:7.14.2
container_name: es02
environment:
- node.name=es02
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es01,es03
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /opt/elasticsearch710/data2:/usr/share/elasticsearch/data
networks:
- elastic
es03:
image: docker.elastic.co/elasticsearch/elasticsearch:7.14.2
container_name: es03
environment:
- node.name=es03
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es01,es02
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /opt/elasticsearch710/data3:/usr/share/elasticsearch/data
networks:
- elastic
networks:
elastic:
driver: bridge
The container behind 9201 can see the index on 9200:
sudo docker exec -it es01 /bin/bash
[root@943159977b17 elasticsearch]# curl -X GET "192.168.0.100:9200/_cat/indices/formshare_records?v&s=index&pretty"
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open formshare_records NpEFzsGvTEmivnYjJhwHJg 5 1 51343 0 7.8mb 3.9mb
curl -X POST "192.168.0.100:9201/_reindex" -H 'Content-Type: application/json' -d'
{
"source": {
"remote": {
"host": "http://192.168.0.100:9200"
},
"index": "formshare_records",
"query": {
"match_all": {}
}
},
"dest": {
"index": "formshare_records"
}
}
'
{"took":18,"timed_out":false,"total":0,"updated":0,"created":0,"deleted":0,"batches":0,"version_conflicts":0,"noops":0,"retries":{"bulk":0,"search":0},"throttled_millis":0,"requests_per_second":-1.0,"throttled_until_millis":0,"failures":[]}
Any idea why the reindex does not work? I tried without the query but got the same result!
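One diagnostic worth running (my suggestion, not from the original post) is to confirm that the source cluster actually returns documents, using the _count API:
curl "http://192.168.0.100:9200/formshare_records/_count?pretty"
If this reports 51343 documents while _reindex still returns "total":0, the problem lies in how the destination cluster reaches the remote source rather than in the data itself.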
In Elastic, you can create roles. For the same index, I would like one role that displays some fields and another role that hides some fields.
For that, I found 'field_security' in the docs:
https://www.elastic.co/guide/en/elastic-stack-overview/7.3/field-level-security.html
Currently I use Elastic + Kibana version 7.3.1 in a Docker container.
My request to create the role is:
POST /_security/role/myNewRole
{
"cluster": ["all"],
"indices": [
{
"names": [ "twitter" ],
"privileges": ["all"],
"field_security" : {
"grant" : [ "user", "password" ]
}
}
]
}
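For context, once the role exists it would be attached to a user with a request like this (a hedged sketch; the username and password are hypothetical):
POST /_security/user/some_user
{
  "password" : "a-strong-password",
  "roles" : [ "myNewRole" ]
}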
And the response is:
{
"error": {
"root_cause": [
{
"type": "security_exception",
"reason": "current license is non-compliant for [field and document level security]",
"license.expired.feature": "field and document level security"
}
],
"type": "security_exception",
"reason": "current license is non-compliant for [field and document level security]",
"license.expired.feature": "field and document level security"
},
"status": 403
}
I checked the license (GET /_license) and got:
{
"license" : {
"status" : "active",
"uid" : "864f625a-fc7a-41de-91f3-c4a64e045a55",
"type" : "basic",
"issue_date" : "2019-09-10T10:04:38.150Z",
"issue_date_in_millis" : 1568109878150,
"max_nodes" : 1000,
"issued_to" : "docker-cluster",
"issuer" : "elasticsearch",
"start_date_in_millis" : -1
}
}
My docker-compose file:
version: '3'
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
environment:
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- ELASTIC_PASSWORD=toto
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "discovery.type=single-node"
- "xpack.security.enabled=true"
- "xpack.security.dls_fls.enabled=true"
ulimits:
memlock:
soft: -1
hard: -1
ports:
- "9200:9200"
networks:
- net
volumes:
- esdata1:/usr/share/elasticsearch/data
kibana:
image: docker.elastic.co/kibana/kibana:7.3.1
environment:
- ELASTICSEARCH_USERNAME=elastic
- ELASTICSEARCH_PASSWORD=toto
ports:
- "5601:5601"
networks:
- net
volumes:
esdata1:
driver: local
networks:
net:
How can I fix this licensing problem?
Thanks
Even though basic security features are free with a Basic license, field- and document-level security is only available at the Platinum subscription level... and to Elastic Cloud users.
So the simplest and not-too-costly way of getting this feature is to subscribe to Elastic Cloud.
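Alternatively, for evaluation only, a 30-day trial license also unlocks these features (a sketch; run this against the cluster):
POST /_license/start_trial?acknowledge=true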
I have just started to learn about the ELK stack. I am referring to this site
https://www.elastic.co/guide/en/elastic-stack-get-started/6.4/get-started-elastic-stack.html
for installing the ELK stack on my system. I have a problem when I try to start Kibana on my Windows system; I get the following error:
log [13:36:52.255] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [13:36:52.277] [warning][admin][elasticsearch] No living connections
log [13:36:52.279] [warning][task_manager] PollError No Living connections
log [13:36:53.810] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [13:36:53.836] [warning][admin][elasticsearch] No living connections
log [13:36:56.456] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [13:36:56.457] [warning][admin][elasticsearch] No living connections
log [13:36:56.458] [warning][task_manager] PollError No Living connections
log [13:36:57.348] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [13:36:57.349] [warning][admin][elasticsearch] No living connections
I think it is having a problem connecting to Elasticsearch, but the Elasticsearch instance appears to have started successfully. When I run
./bin/elasticsearch.bat
I get the following output:
[2019-09-01T18:34:11,594][INFO ][o.e.h.AbstractHttpServerTransport] [DESKTOP-TD85D7S] publish_address {192.168.0.101:9200}, bound_addresses {192.168.99.1:9200}, {192.168.56.1:9200}, {192.168.0.101:9200}
[2019-09-01T18:34:11,595][INFO ][o.e.n.Node ] [DESKTOP-TD85D7S] started
Your Elasticsearch log shows it publishing on 192.168.0.101:9200 rather than localhost, so in your kibana.yml configuration file you need to change the following line:
elasticsearch.hosts: ["http://localhost:9200"]
to
elasticsearch.hosts: ["http://192.168.0.101:9200"]
Note: Elasticsearch 7.4.0, Kibana 7.4.0; status: working.
I am using a docker-compose.yml file to run Elasticsearch and Kibana on localhost. Port 9200 is being used by another service, so I have mapped 9201:9200 (9201 on localhost to 9200 in the Docker container).
In Kibana's environment variables we set the Elasticsearch host and port (the port should be the container port), e.g. ELASTICSEARCH_HOSTS=http://elasticsearch:9200.
File: docker-compose.yml
version: '3.7'
services:
# Elasticsearch
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
container_name: elasticsearch
environment:
- xpack.security.enabled=false
- discovery.type=single-node
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
cap_add:
- IPC_LOCK
volumes:
- elasticsearch-data:/usr/share/elasticsearch/data
ports:
- 9201:9200
- 9300:9300
# Kibana
kibana:
container_name: kibana
image: docker.elastic.co/kibana/kibana:7.4.0
environment:
- ELASTICSEARCH_HOSTS=http://elasticsearch:9200
ports:
- 5601:5601
depends_on:
- elasticsearch
volumes:
elasticsearch-data:
driver: local
Elasticsearch is running at http://localhost:9201; opening it you will get something similar to:
{
"name" : "d0bb78764b7e",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "Djch5nbnSWC-EqYawp2Cng",
"version" : {
"number" : "7.4.0",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "22e1767283e61a198cb4db791ea66e3f11ab9910",
"build_date" : "2019-09-27T08:36:48.569419Z",
"build_snapshot" : false,
"lucene_version" : "8.2.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
Kibana is running at http://localhost:5601; open it in the browser.
Note: if your Docker host is a server other than your local machine, replace localhost with that server's hostname.
I found the error in a log file: /var/log/elasticsearch/my-instance.log
[2022-07-25T15:59:44,049][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler]
[nextcloud] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: ElasticsearchException[failed to bind service];
nested: AccessDeniedException[/var/lib/elasticsearch/nodes];
You have to set the setgid bit (chmod g+s) on the folder /var/lib/elasticsearch/nodes:
# mkdir /var/lib/elasticsearch/nodes
# chown elasticsearch:elasticsearch /var/lib/elasticsearch/nodes
# chmod g+s /var/lib/elasticsearch/nodes
# ls -ltr /var/lib/elasticsearch/nodes
drwxr-sr-x 5 elasticsearch elasticsearch 4096 25 juil. 16:42 0/
You can then query localhost on port 9200:
# curl http://localhost:9200
{
"name" : "nextcloud",
"cluster_name" : "my-instance",
"cluster_uuid" : "040...V3TA",
"version" : {
"number" : "7.14.1",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "66b...331e",
"build_date" : "2021-08-26T09:01:05.390870785Z",
"build_snapshot" : false,
"lucene_version" : "8.9.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
My environment: Debian 11.
I installed Elasticsearch by hand, by downloading the package elasticsearch-7.14.1-amd64.deb:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.14.1-amd64.deb
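The package is then installed and started in the usual way for a .deb (these steps are implied but not shown above):
sudo dpkg -i elasticsearch-7.14.1-amd64.deb
sudo systemctl start elasticsearch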
Hope it helps.
Elasticsearch 7.0.0 is configured like this on CentOS 7.6:
sudo cat /etc/elasticsearch/elasticsearch.yml:
cluster.name: elk-log-elasticsearch
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
http.port: 9200
From inside the server:
curl --verbose http://127.0.0.1:9200
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 525
<
{
"name" : "Cardif.software.altkom.pl",
"cluster_name" : "elk-log-elasticsearch",
"cluster_uuid" : "rTMG9hXBTk-CuA73G9KHSA",
"version" : {
"number" : "7.0.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "b7e28a7",
"build_date" : "2019-04-05T22:55:32.697037Z",
"build_snapshot" : false,
"lucene_version" : "8.0.0",
"minimum_wire_compatibility_version" : "6.7.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
From outside of this server (call it 'A'), on server 'B' I can ping server 'A'.
I know that its IP is like 172.16.xx.x.
I can access Kibana at http://172.16.xx.x:5601 in the browser, but I cannot access the Elasticsearch page at http://172.16.xx.x:9200.
How can I change the config to make it work?
Ports are enabled in firewalld:
firewall-cmd --list-all
ports: 5432/tcp 80/tcp 5601/tcp 5602/tcp 9200/tcp 9201/tcp 15672/tcp 8080/tcp 8081/tcp 8082/tcp 5488/tcp
I tried:
1) network.host: 0.0.0.0
2) network.bind_host: 172.x.x.x
This does the trick:
network.host: 0.0.0.0
discovery.seed_hosts: 127.0.0.1
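For context (an explanation of my own, not part of the original answer): binding to a non-loopback address puts Elasticsearch into production mode, which enforces bootstrap checks that fail unless a discovery setting is present; discovery.seed_hosts: 127.0.0.1 satisfies that check. For a genuinely single-node setup, an equivalent configuration would be:
network.host: 0.0.0.0
discovery.type: single-node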