How to get connection from node-data to master in elasticsearch? - elasticsearch

I have checked all the documentation I could find online. I tried to bind the data node to the master node, but I realized there is an error in my logs.
Error: if I check the logs on "192.168.5.84", the error below occurs:
[node1] not enough master nodes discovered during pinging (found [[]], but needed [1])
2017-08-16 13:37:38 Commons Daemon procrun stdout initialized
[2017-08-16T13:37:43,253][INFO ][o.e.n.Node ] [node1] initializing ...
[2017-08-16T13:37:43,346][INFO ][o.e.e.NodeEnvironment ] [node1] using [1] data paths, mounts [[(C:)]], net usable_space [10.7gb], net total_space [39.6gb], spins? [unknown], types [NTFS]
[2017-08-16T13:37:43,346][INFO ][o.e.e.NodeEnvironment ] [node1] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-08-16T13:37:43,472][INFO ][o.e.n.Node ] [node1] node name [node1], node ID [81pArkMqSUuBVnKwny1Blw]
[2017-08-16T13:37:43,472][INFO ][o.e.n.Node ] [node1] version[5.4.1], pid[7632], build[2cfe0df/2017-05-29T16:05:51.443Z], OS[Windows Server 2012 R2/6.3/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_131/25.131-b11]
[2017-08-16T13:37:43,472][INFO ][o.e.n.Node ] [node1] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+DisableExplicitGC, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Delasticsearch, -Des.path.home=C:\elk\elasticsearch, -Des.default.path.logs=C:\elk\elasticsearch\logs, -Des.default.path.data=C:\elk\elasticsearch\data, -Des.default.path.conf=C:\elk\elasticsearch\config, exit, -Xms2048m, -Xmx2048m, -Xss1024k]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [aggs-matrix-stats]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [ingest-common]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [lang-expression]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [lang-groovy]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [lang-mustache]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [lang-painless]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [percolator]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [reindex]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [transport-netty3]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [transport-netty4]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] no plugins loaded
[2017-08-16T13:37:50,987][INFO ][o.e.d.DiscoveryModule ] [node1] using discovery type [zen]
[2017-08-16T13:37:52,347][INFO ][o.e.n.Node ] [node1] initialized
[2017-08-16T13:37:52,347][INFO ][o.e.n.Node ] [node1] starting ...
[2017-08-16T13:37:53,190][INFO ][o.e.t.TransportService ] [node1] publish_address {192.168.5.84:9300}, bound_addresses {192.168.5.84:9300}
[2017-08-16T13:37:53,206][INFO ][o.e.b.BootstrapChecks ] [node1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-08-16T13:37:56,362][WARN ][o.e.d.z.ZenDiscovery ] [node1] not enough master nodes discovered during pinging (found [[]], but needed [1]), pinging again
[2017-08-16T13:37:59,378][WARN ][o.e.d.z.ZenDiscovery ] [node1] not enough master nodes discovered during pinging (found [[]], but needed [1]), pinging again
[2017-08-16T13:38:02,394][WARN ][o.e.d.z.ZenDiscovery ] [node1] not enough master nodes discovered during pinging (found [[]], but needed [1]), pinging again
My master (it is working perfectly!): 10.180.11.82
cluster.name: elasticsearch
node.name: "lmaster"
node.master: true
node.data: true
network.host: 10.180.11.82
http.port: 333
#network.bind_host: ["192.168.5.84"]
#discovery.zen.ping.multicast.enabled: true
discovery.zen.ping.unicast.hosts: ["10.180.11.82:333"]
My data node (the above error occurs here): 192.168.5.84
network.host: 192.168.5.84
http.port: 333
cluster.name: elasticsearch
node.name: "node1"
node.master: false
node.data: true
#discovery.zen.ping.unicast.hosts: ["10.180.11.82:333"]
#network.bind_host: 10.180.11.82
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.unicast.hosts: ["10.180.11.82:333"]

Your settings are all over the place. Start with something simpler, like the following, make sure it works, and only then, if you are not happy with the ports/IPs used, start changing things. In particular, note that Zen discovery pings the transport port (9300 by default), not http.port, so a unicast hosts entry pointing at port 333 will never find the master:
Node 10.180.11.82:
cluster.name: elasticsearch
node.name: "lmaster"
node.master: true
node.data: true
network.host: 10.180.11.82
discovery.zen.ping.unicast.hosts: ["192.168.5.84:9300"]
discovery.zen.minimum_master_nodes: 1
Node 192.168.5.84:
cluster.name: elasticsearch
node.name: "node1"
node.master: false
node.data: true
network.host: 192.168.5.84
discovery.zen.ping.unicast.hosts: ["10.180.11.82:9300"]
discovery.zen.minimum_master_nodes: 1
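Once both nodes point their unicast hosts at the master's transport port, it is worth verifying that the port is actually reachable from the data node; a firewall blocking 9300 produces exactly the "not enough master nodes discovered" loop. A rough sketch using bash's built-in /dev/tcp (IPs taken from the question):

```shell
#!/usr/bin/env bash
# Probe a TCP port; returns 0 if a connection can be opened within 2 seconds.
check_port() {
  local host=$1 port=$2
  timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Run this on the data node (192.168.5.84): can we reach the master's
# transport port (9300), which is what Zen discovery actually pings?
if check_port 10.180.11.82 9300; then
  echo "transport port reachable"
else
  echo "transport port blocked (check firewall / network.host binding)"
fi
```

If this prints "blocked", fix the firewall or the network binding before touching the Elasticsearch configs any further.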

Related

logstash 7.9.1 docker container: file input is not working

I am trying to read a log file, but it is not working. It works when logstash.conf is configured to listen on port 5000, but reading from a file does not. I am using Logstash 7.9.1 in a Docker container and trying to send the logs to Elasticsearch 7.9.1.
This is my logstash.conf file
input {
  file {
    path => ["/home/douglas/projects/incollect/*.log"]
    start_position => "beginning"
    ignore_older => 0
    sincedb_path => "/dev/null"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "test-elk-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "changeme"
  }
  stdout {
    codec => rubydebug
  }
}
These are the logs from the console. I can't see any errors, and it says Logstash started successfully:
logstash_1 | [2020-10-16T00:38:27,748][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1 | [2020-10-16T00:38:27,795][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
logstash_1 | [2020-10-16T00:38:27,798][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x44d5fe run>"}
logstash_1 | [2020-10-16T00:38:27,800][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x4c6dee32 run>"}
logstash_1 | [2020-10-16T00:38:27,840][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
logstash_1 | [2020-10-16T00:38:28,535][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.73}
logstash_1 | [2020-10-16T00:38:28,599][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
logstash_1 | [2020-10-16T00:38:28,600][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.8}
logstash_1 | [2020-10-16T00:38:28,840][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
logstash_1 | [2020-10-16T00:38:28,909][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
logstash_1 | [2020-10-16T00:38:28,920][INFO ][filewatch.observingtail ][main][4a3eb924128694e00dae8e6fab084bfc5e3c3692e66663362019b182fcb31a48] START, creating Discoverer, Watch with file and sincedb collections
logstash_1 | [2020-10-16T00:38:29,386][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
and this is my log file:
Oct 9 15:34:19 incollect drupal: http://dev.incollect.com|1602257659|DEV|52.202.31.67|http://dev.incollect.com/icadmin/inquires_report?q=icadmin/ajax_validate_and_fix_inquire_by_id|http://dev.incollect.com/icadmin/inquires_report|3||Validate inquireStep 0
Oct 9 15:34:19 incollect drupal: http://dev.incollect.com|1602257659|DEV|52.202.31.67|http://dev.incollect.com/icadmin/inquires_report?q=icadmin/ajax_validate_and_fix_inquire_by_id|http://dev.incollect.com/icadmin/inquires_report|3||Validate inquireStep 1 - inquire_id:14219
Edit:
I am adding the docker-compose file; this is my configuration for Logstash:
logstash:
  build:
    context: logstash/
    args:
      ELK_VERSION: $ELK_VERSION
  volumes:
    - type: bind
      source: ./logstash/config/logstash.yml
      target: /usr/share/logstash/config/logstash.yml
      read_only: true
    - type: bind
      source: ./logstash/pipeline
      target: /usr/share/logstash/pipeline
      read_only: true
  volumes:
    - ./../../:/usr/share/logstash
  ports:
    - "5000:5000/tcp"
    - "5000:5000/udp"
    - "9600:9600"
  environment:
    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
  depends_on:
    - elasticsearch
I am not sure what the problem is; I have tried different solutions, but none of them work.
If - ./../../:/usr/share/logstash is what you are using to mount the logs volume, then your Logstash file input path should point to /usr/share/logstash/*.log.
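In other words, path must use the container-side location, not the host path. A minimal sketch of the corrected input, assuming the logs really do land under /usr/share/logstash via that bind mount:

```conf
input {
  file {
    # container-side path: this is where the host directory is mounted
    path => ["/usr/share/logstash/*.log"]
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
```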

Elasticsearch: Data server not discovering master

I have a 1-data, 1-master ES cluster (running 6.4.2 on CentOS 7).
On my master01:
==> /opt/elasticsearch/logs/master01-elastic.my-local-domain-master01-elastic/esa-local-stg-cluster.log <==
[2019-02-08T11:06:21,267][INFO ][o.e.n.Node ] [master01-elastic] initialized
[2019-02-08T11:06:21,267][INFO ][o.e.n.Node ] [master01-elastic] starting ...
[2019-02-08T11:06:21,460][INFO ][o.e.t.TransportService ] [master01-elastic] publish_address {10.18.0.13:9300}, bound_addresses {10.18.0.13:9300}
[2019-02-08T11:06:21,478][INFO ][o.e.b.BootstrapChecks ] [master01-elastic] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-02-08T11:06:24,543][INFO ][o.e.c.s.MasterService ] [master01-elastic] zen-disco-elected-as-master ([0] nodes joined)[, ], reason: new_master {master01-elastic}{10kX4tQMTzS0O8AQYvieZw}{GH9oflu7QZuJB_U7sPJDlg}{10.18.0.13}{10.18.0.13:9300}{xpack.installed=true}
[2019-02-08T11:06:24,550][INFO ][o.e.c.s.ClusterApplierService] [master01-elastic] new_master {master01-elastic}{10kX4tQMTzS0O8AQYvieZw}{GH9oflu7QZuJB_U7sPJDlg}{10.18.0.13}{10.18.0.13:9300}{xpack.installed=true}, reason: apply cluster state (from master [master {master01-elastic}{10kX4tQMTzS0O8AQYvieZw}{GH9oflu7QZuJB_U7sPJDlg}{10.18.0.13}{10.18.0.13:9300}{xpack.installed=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)[, ]]])
[2019-02-08T11:06:24,575][INFO ][o.e.h.n.Netty4HttpServerTransport] [master01-elastic] publish_address {10.18.0.13:9200}, bound_addresses {10.18.0.13:9200}
[2019-02-08T11:06:24,575][INFO ][o.e.n.Node ] [master01-elastic] started
[2019-02-08T11:06:24,614][INFO ][o.e.l.LicenseService ] [master01-elastic] license [c2004733-fa30-4249-bb07-d5f2238816ad] mode [basic] - valid
[2019-02-08T11:06:24,615][INFO ][o.e.g.GatewayService ] [master01-elastic] recovered [0] indices into cluster_state
[root@master01-elastic ~]# systemctl status elasticsearch
● master01-elastic_elasticsearch.service - Elasticsearch-master01-elastic
Loaded: loaded (/usr/lib/systemd/system/master01-elastic_elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-02-08 11:06:12 EST; 2 days ago
Docs: http://www.elastic.co
Main PID: 18695 (java)
CGroup: /system.slice/master01-elastic_elasticsearch.service
├─18695 /bin/java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Djava.awt.headless=true -Dfile.encoding...
└─18805 /usr/share/elasticsearch/modules/x-pack/x-pack-ml/platform/linux-x86_64/bin/controller
Feb 08 11:06:12 master01-elastic systemd[1]: Started Elasticsearch-master01-elastic.
[root@master01-elastic ~]# ss -tula | grep -i 9300
[root@master01-elastic ~]#
cluster logs on my master01:
[2019-02-11T02:36:21,406][INFO ][o.e.n.Node ] [master01-elastic] initialized
[2019-02-11T02:36:21,406][INFO ][o.e.n.Node ] [master01-elastic] starting ...
[2019-02-11T02:36:21,619][INFO ][o.e.t.TransportService ] [master01-elastic] publish_address {10.18.0.13:9300}, bound_addresses {10.18.0.13:9300}
[2019-02-11T02:36:21,654][INFO ][o.e.b.BootstrapChecks ] [master01-elastic] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-02-11T02:36:24,813][INFO ][o.e.c.s.MasterService ] [master01-elastic] zen-disco-elected-as-master ([0] nodes joined)[, ], reason: new_master {master01-elastic}{10kX4tQMTzS0O8AQYvieZw}{Vgq60hVVRn-3aO_uBuc2uQ}{10.18.0.13}{10.18.0.13:9300}{xpack.installed=true}
[2019-02-11T02:36:24,818][INFO ][o.e.c.s.ClusterApplierService] [master01-elastic] new_master {master01-elastic}{10kX4tQMTzS0O8AQYvieZw}{Vgq60hVVRn-3aO_uBuc2uQ}{10.18.0.13}{10.18.0.13:9300}{xpack.installed=true}, reason: apply cluster state (from master [master {master01-elastic}{10kX4tQMTzS0O8AQYvieZw}{Vgq60hVVRn-3aO_uBuc2uQ}{10.18.0.13}{10.18.0.13:9300}{xpack.installed=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)[, ]]])
[2019-02-11T02:36:24,856][INFO ][o.e.h.n.Netty4HttpServerTransport] [master01-elastic] publish_address {10.18.0.13:9200}, bound_addresses {10.18.0.13:9200}
[2019-02-11T02:36:24,856][INFO ][o.e.n.Node ] [master01-elastic] started
[2019-02-11T02:36:24,873][INFO ][o.e.l.LicenseService ] [master01-elastic] license [c2004733-fa30-4249-bb07-d5f2238816ad] mode [basic] - valid
[2019-02-11T02:36:24,875][INFO ][o.e.g.GatewayService ] [master01-elastic] recovered [0] indices into cluster_state
This makes the master undiscoverable, so on my data01:
[2019-02-11T02:24:09,882][WARN ][o.e.d.z.ZenDiscovery ] [data01-elastic] not enough master nodes discovered during pinging (found [[]], but needed [1]), pinging again
Also on my data01
[root@data01-elastic ~]# cat /etc/elasticsearch/data01-elastic/elasticsearch.yml | grep -i zen
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.unicast.hosts: 10.18.0.13:9300
[root@data01-elastic ~]# ping 10.18.0.13
PING 10.18.0.13 (10.18.0.13) 56(84) bytes of data.
64 bytes from 10.18.0.13: icmp_seq=1 ttl=64 time=0.171 ms
64 bytes from 10.18.0.13: icmp_seq=2 ttl=64 time=0.147 ms
How can I further troubleshoot this?
The cluster was deployed using these ansible scripts:
with this configuration for the master:
- hosts: masters
  tasks:
    - name: Elasticsearch Master Configuration
      import_role:
        name: elastic.elasticsearch
      vars:
        es_instance_name: "{{ ansible_hostname }}"
        es_data_dirs:
          - "{{ data_dir }}"
        es_log_dir: "/opt/elasticsearch/logs"
        es_config:
          node.name: "{{ ansible_hostname }}"
          cluster.name: "{{ cluster_name }}"
          discovery.zen.ping.unicast.hosts: "{% for host in groups['masters'] -%}{{ hostvars[host]['ansible_ens33']['ipv4']['address'] }}:9300{% if not loop.last %},{% endif %}{%- endfor %}"
          http.port: 9200
          transport.tcp.port: 9300
          node.data: false
          node.master: true
          bootstrap.memory_lock: true
          network.host: '{{ ansible_facts["ens33"]["ipv4"]["address"] }}'
          discovery.zen.minimum_master_nodes: 1
        es_xpack_features: []
        es_scripts: false
        es_templates: false
        es_version_lock: true
        es_heap_size: 2g
        es_api_port: 9200
and this for the data
- hosts: data
  tasks:
    - name: Elasticsearch Data Configuration
      import_role:
        name: elastic.elasticsearch
      vars:
        es_instance_name: "{{ ansible_hostname }}"
        es_data_dirs:
          - "{{ data_dir }}"
        es_log_dir: "/opt/elasticsearch/logs"
        es_config:
          node.name: "{{ ansible_hostname }}"
          cluster.name: "{{ cluster_name }}"
          discovery.zen.ping.unicast.hosts: "{% for host in groups['masters'] -%}{{ hostvars[host]['ansible_ens33']['ipv4']['address'] }}:9300{% if not loop.last %},{% endif %}{%- endfor %}"
          http.port: 9200
          transport.tcp.port: 9300
          node.data: true
          node.master: false
          bootstrap.memory_lock: true
          network.host: '{{ ansible_facts["ens33"]["ipv4"]["address"] }}'
          discovery.zen.minimum_master_nodes: 1
        es_xpack_features: []
        es_scripts: false
        es_templates: false
        es_version_lock: true
        es_heap_size: 6g
        es_api_port: 9200
The two VMs I was trying to establish communication between were CentOS 7, which has firewalld enabled by default.
Disabling and stopping the service solved the issue.
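An alternative to disabling firewalld outright is to open only the ports Elasticsearch needs; a sketch assuming the default 9200/9300 ports configured in the playbooks above:

```shell
# On each node: permanently allow the Elasticsearch HTTP (9200) and
# transport (9300) ports, then reload firewalld so the rules take effect.
sudo firewall-cmd --permanent --add-port=9200/tcp
sudo firewall-cmd --permanent --add-port=9300/tcp
sudo firewall-cmd --reload
sudo firewall-cmd --list-ports   # verify both ports are listed
```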

Elasticsearch connection refused while kibana is trying to connect

I am trying to run the ELK stack using Docker containers, but I am getting an error that Kibana is unable to make a connection to Elasticsearch.
kibana_1 | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["error","elasticsearch","admin"],"pid":12,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => connect ECONNREFUSED 172.18.0.2:9200"}
kibana_1 | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["status","plugin:console@5.6.9","info"],"pid":12,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"No living connections"}
kibana_1 | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["status","plugin:elasticsearch@5.6.9","error"],"pid":12,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch:9200.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana_1 | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["status","plugin:metrics@5.6.9","info"],"pid":12,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elasticsearch_1 | [2018-06-22T19:31:38,182][INFO ][o.e.d.DiscoveryModule ] [g8HPieb] using discovery type [zen]
kibana_1 | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["status","plugin:timelion@5.6.9","info"],"pid":12,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["listening","info"],"pid":12,"message":"Server running at http://0.0.0.0:5601"}
kibana_1 | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["status","ui settings","error"],"pid":12,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
elasticsearch_1 | [2018-06-22T19:31:38,634][INFO ][o.e.n.Node ] initialized
elasticsearch_1 | [2018-06-22T19:31:38,634][INFO ][o.e.n.Node ] [g8HPieb] starting ...
elasticsearch_1 | [2018-06-22T19:31:38,767][INFO ][o.e.t.TransportService ] [g8HPieb] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
elasticsearch_1 | [2018-06-22T19:31:38,776][WARN ][o.e.b.BootstrapChecks ] [g8HPieb] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
logstash_1 | log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
logstash_1 | log4j:WARN Please initialize the log4j system properly.
logstash_1 | log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
**logstash_1 | {:timestamp=>"2018-06-22T19:31:40.555000+0000", :message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}**
kibana_1 | {"type":"log","@timestamp":"2018-06-22T19:31:40Z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","@timestamp":"2018-06-22T19:31:40Z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"No living connections"}
Here is the content of my docker-compose file:
version: "2.0"
services:
  logstash:
    image: logstash:2
    ports:
      - "5044:5044"
    volumes:
      - ./:/config
    command: logstash -f /config/logstash.conf
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:5.6.9
    ports:
      - "9200:9200"
    volumes:
      - "./es_data/es_data:/usr/share/elasticsearch/data/"
  kibana:
    image: kibana:5
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    depends_on:
      - elasticsearch
Content of my logstash.conf
input { beats { port => 5044 } }
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  stdout {
    codec => rubydebug
  }
}
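One thing worth noting in this config, offered as an observation: inside the logstash container, localhost:9200 refers to the Logstash container itself, not to Elasticsearch. With the compose file above, the service is reachable under its service name, so the output block would presumably need to look like:

```conf
output {
  elasticsearch {
    # "localhost" inside the logstash container is logstash itself;
    # use the compose service name from docker-compose instead
    hosts => ["elasticsearch:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```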
I have curled Elasticsearch from both the elasticsearch and kibana containers, and it looks good to me:
{
  "name" : "g8HPieb",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "XxH0TcAmQcGqprf6s7TJEQ",
  "version" : {
    "number" : "5.6.9",
    "build_hash" : "877a590",
    "build_date" : "2018-04-12T16:25:14.838Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}
curl localhost:9200/_cat/indices?pretty
yellow open .kibana GIBmXdlRQJmI67oq5r4oCg 1 1 1 0 3.2kb 3.2kb
After increasing the virtual memory size:
root@sfbp19:~/dockerizing-jenkins# sysctl -p
vm.max_map_count = 262144
root@sfbp19:~/dockerizing-jenkins# docker-compose -f docker-compose-elk.yml up
Creating network "dockerizingjenkins_default" with the default driver
Creating dockerizingjenkins_elasticsearch_1
Creating dockerizingjenkins_logstash_1
Creating dockerizingjenkins_kibana_1
Attaching to dockerizingjenkins_elasticsearch_1, dockerizingjenkins_kibana_1, dockerizingjenkins_logstash_1
elasticsearch_1 | [2018-06-26T19:08:19,294][INFO ][o.e.n.Node ] [] initializing ...
elasticsearch_1 | [2018-06-26T19:08:19,363][INFO ][o.e.e.NodeEnvironment ] [PVmTsqv] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/sfbp19--vg-root)]], net usable_space [671.9gb], net total_space [789.2gb], spins? [possibly], types [ext4]
elasticsearch_1 | [2018-06-26T19:08:19,364][INFO ][o.e.e.NodeEnvironment ] [PVmTsqv] heap size [1.9gb], compressed ordinary object pointers [true]
elasticsearch_1 | [2018-06-26T19:08:19,369][INFO ][o.e.n.Node ] node name [PVmTsqv] derived from node ID [PVmTsqv3QnyS3sQarPcJ-A]; set [node.name] to override
elasticsearch_1 | [2018-06-26T19:08:19,369][INFO ][o.e.n.Node ] version[5.6.9], pid[1], build[877a590/2018-04-12T16:25:14.838Z], OS[Linux/4.4.0-31-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_171/25.171-b11]
elasticsearch_1 | [2018-06-26T19:08:19,369][INFO ][o.e.n.Node ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [aggs-matrix-stats]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [ingest-common]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [lang-expression]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [lang-groovy]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [lang-mustache]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [lang-painless]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [parent-join]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [percolator]
elasticsearch_1 | [2018-06-26T19:08:20,041][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [reindex]
elasticsearch_1 | [2018-06-26T19:08:20,041][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [transport-netty3]
elasticsearch_1 | [2018-06-26T19:08:20,041][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [transport-netty4]
elasticsearch_1 | [2018-06-26T19:08:20,041][INFO ][o.e.p.PluginsService ] [PVmTsqv] no plugins loaded
kibana_1 | {"type":"log","@timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:kibana@5.6.9","info"],"pid":13,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:elasticsearch@5.6.9","info"],"pid":13,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2018-06-26T19:08:20Z","tags":["error","elasticsearch","admin"],"pid":13,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => connect ECONNREFUSED 172.18.0.2:9200"}
kibana_1 | {"type":"log","@timestamp":"2018-06-26T19:08:20Z","tags":["warning","elasticsearch","admin"],"pid":13,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","@timestamp":"2018-06-26T19:08:20Z","tags":["warning","elasticsearch","admin"],"pid":13,"message":"No living connections"}
kibana_1 | {"type":"log","@timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:console@5.6.9","info"],"pid":13,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:elasticsearch@5.6.9","error"],"pid":13,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch:9200.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana_1 | {"type":"log","@timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:metrics@5.6.9","info"],"pid":13,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2018-06-26T19:08:21Z","tags":["status","plugin:timelion@5.6.9","info"],"pid":13,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2018-06-26T19:08:21Z","tags":["listening","info"],"pid":13,"message":"Server running at http://0.0.0.0:5601"}
kibana_1 | {"type":"log","@timestamp":"2018-06-26T19:08:21Z","tags":["status","ui settings","error"],"pid":13,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
elasticsearch_1 | [2018-06-26T19:08:21,190][INFO ][o.e.d.DiscoveryModule ] [PVmTsqv] using discovery type [zen]
elasticsearch_1 | [2018-06-26T19:08:21,654][INFO ][o.e.n.Node ] initialized
elasticsearch_1 | [2018-06-26T19:08:21,654][INFO ][o.e.n.Node ] [PVmTsqv] starting ...
elasticsearch_1 | [2018-06-26T19:08:21,780][INFO ][o.e.t.TransportService ] [PVmTsqv] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
logstash_1 | log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
logstash_1 | log4j:WARN Please initialize the log4j system properly.
logstash_1 | log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
kibana_1 | {"type":"log","@timestamp":"2018-06-26T19:08:23Z","tags":["warning","elasticsearch","admin"],"pid":13,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","@timestamp":"2018-06-26T19:08:23Z","tags":["warning","elasticsearch","admin"],"pid":13,"message":"No living connections"}
logstash_1 | {:timestamp=>"2018-06-26T19:08:23.572000+0000", :message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
logstash_1 | {:timestamp=>"2018-06-26T19:08:23.790000+0000", :message=>"Pipeline main started"}
elasticsearch_1 | [2018-06-26T19:08:24,837][INFO ][o.e.c.s.ClusterService ] [PVmTsqv] new_master {PVmTsqv}{PVmTsqv3QnyS3sQarPcJ-A}{coD5A4HyR7-1MedSq8dFUQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)[, ]
elasticsearch_1 | [2018-06-26T19:08:24,869][INFO ][o.e.h.n.Netty4HttpServerTransport] [PVmTsqv] publish_address {172.18.0.2:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch_1 | [2018-06-26T19:08:24,870][INFO ][o.e.n.Node ] [PVmTsqv] started
elasticsearch_1 | [2018-06-26T19:08:24,989][INFO ][o.e.g.GatewayService ] [PVmTsqv] recovered [1] indices into cluster_state
elasticsearch_1 | [2018-06-26T19:08:25,148][INFO ][o.e.c.r.a.AllocationService] [PVmTsqv] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
kibana_1 | {"type":"log","@timestamp":"2018-06-26T19:08:26Z","tags":["status","plugin:elasticsearch@5.6.9","info"],"pid":13,"state":"green","message":"Status changed from red to green - Kibana index ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://elasticsearch:9200."}
kibana_1 | {"type":"log","@timestamp":"2018-06-26T19:08:26Z","tags":["status","ui settings","info"],"pid":13,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Elasticsearch plugin is red"}
========================filebeat.yml==============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /jenkins/gerrit_volume/logs/*_log
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
#============================== Kibana =====================================
setup.kibana:
  #host: "localhost:5601"
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.1.9.69:5044"]
logging.level: debug
Judging from your logging, this looks like an Elasticsearch problem that is preventing ES from coming up. This line:
elasticsearch_1 | [2018-06-22T19:31:38,776][WARN ][o.e.b.BootstrapChecks ] [g8HPieb] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
You can bump it up temporarily with the following command:
sysctl -w vm.max_map_count=262144
Or set it permanently by adding the following line to /etc/sysctl.conf and running sysctl -p to pick up the config if you're on a live instance:
vm.max_map_count=262144
Since you're doing this in a Docker container, note that the setting has to be applied on the Docker host (containers inherit the host's kernel settings), so you probably want the persistent /etc/sysctl.conf option.
Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
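To make the relationship to docker-compose concrete: vm.max_map_count is a host-wide kernel setting, not a per-container one, so it cannot be raised from inside the compose file itself. A sketch of what the compose service might look like alongside the host-side fix (image tag and service name are assumptions based on the Kibana 5.6.9 version in your logs):

```yaml
# docker-compose.yml (fragment) -- vm.max_map_count cannot be set here;
# run `sysctl -w vm.max_map_count=262144` on the Docker host first.
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.9
    ulimits:
      memlock:        # lets bootstrap.memory_lock=true lock the heap in RAM
        soft: -1
        hard: -1
```

The memlock ulimit is optional but commonly paired with this setup so the JVM heap cannot be swapped out.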

ElasticSearch is swapping

I launched my new site today and am having problems with Elasticsearch swapping. I don't know how to fix it, because I have given enough heap memory to each of the 2 nodes, but there are still problems.
I am attaching the screenshots from HQ. Can someone help me?
---- config /etc/elasticsearch.yml ---
cluster.name: xxx
node.name: xxx
node.data: true
node.master: true
bootstrap.mlockall: true
index.translog.flush_threshold_ops: 50000
index.merge.policy.merge_factor: 5
index.merge.policy.segments_per_tier: 10
# Bulk pool
threadpool.bulk.type: fixed
threadpool.bulk.size: 1000
threadpool.bulk.queue_size: 30000
# Index pool
threadpool.index.type: fixed
threadpool.index.size: 1000
threadpool.index.queue_size: 10000
index.cache.query.enable: true
indices.cache.query.size: 15%
index.cache.field.expire: 1h
indices.fielddata.cache.size: 15%
indices.fielddata.cache.expire: 1h
indices.cache.filter.size: 15%
index.store.type: mmapfs
transport.tcp.compress: true
network.bind_host: xxxx
network.publish_host: xxxx
network.host: xxxx
discovery.zen.ping.unicast.hosts: ["xxxx"]
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.timeout: 10s
transport.tcp.port: 9300
http.port: 9200
http.max_content_length: 500mb
index.routing.allocation.disable_allocation: false
index.search.slowlog.threshold.query.warn: 10s
index.search.slowlog.threshold.query.info: 5s
index.search.slowlog.threshold.query.debug: 2s
index.search.slowlog.threshold.query.trace: 500ms
script.engine.groovy.inline.aggs: on
script.inline: on
script.indexed: on
index.max_result_window: 40000
--- config /etc/default/elasticsearch -----
ES_HEAP_SIZE=5g
---- JAVA process -----
/usr/bin/java -Xms5g -Xmx5g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.1.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.pidfile=/var/run/elasticsearch/elasticsearch.pid -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.conf=/etc/elasticsearch
---- node info ----
{"cluster_name":"xxxxx_search","nodes":{"Tze_hXZ2SwqQIvg4YWOcMg":{"timestamp":1460563870399,"name":"xxxxx-slave-1","transport_address":"xxxx:9301","host":"xxxxx","ip":["xxxxx:9301","NONE"],"jvm":{"timestamp":1460563870399,"uptime_in_millis":17908517,"mem":{"heap_used_in_bytes":2679603328,"heap_used_percent":50,"heap_committed_in_bytes":5333843968,"heap_max_in_bytes":5333843968,"non_heap_used_in_bytes":105721416,"non_heap_committed_in_bytes":108711936,"pools":{"young":{"used_in_bytes":15540664,"max_in_bytes":279183360,"peak_used_in_bytes":279183360,"peak_max_in_bytes":279183360},"survivor":{"used_in_bytes":2730400,"max_in_bytes":34865152,"peak_used_in_bytes":34865152,"peak_max_in_bytes":34865152},"old":{"used_in_bytes":2661332264,"max_in_bytes":5019795456,"peak_used_in_bytes":3813217632,"peak_max_in_bytes":5019795456}}},"threads":{"count":56,"peak_count":63},"gc":{"collectors":{"young":{"collection_count":1170,"collection_time_in_millis":38743},"old":{"collection_count":4,"collection_time_in_millis":220}}},"buffer_pools":{"direct":{"count":61,"used_in_bytes":15744627,"total_capacity_in_bytes":15744627},"mapped":{"count":85,"used_in_bytes":4319218096,"total_capacity_in_bytes":4319218096}}}},"DrG535FuQKygzKlSCAWwLw":{"timestamp":1460563870399,"name":"xxxxx-master-1","transport_address":"xxxxx:9300","host":"xxxx","ip":["xxxxx:9300","NONE"],"attributes":{"master":"true"},"jvm":{"timestamp":1460563870399,"uptime_in_millis":17912689,"mem":{"heap_used_in_bytes":2315059272,"heap_used_percent":43,"heap_committed_in_bytes":5333843968,"heap_max_in_bytes":5333843968,"non_heap_used_in_bytes":118353088,"non_heap_committed_in_bytes":121683968,"pools":{"young":{"used_in_bytes":172840784,"max_in_bytes":279183360,"peak_used_in_bytes":279183360,"peak_max_in_bytes":279183360},"survivor":{"used_in_bytes":2480072,"max_in_bytes":34865152,"peak_used_in_bytes":34865152,"peak_max_in_bytes":34865152},"old":{"used_in_bytes":2139738416,"max_in_bytes":5019795456,"peak_used_in_bytes":3826731840,"peak_max_in_bytes":5019795456}}},"threads":{"count":59,"peak_count":71},"gc":{"collectors":{"young":{"collection_count":1368,"collection_time_in_millis":47571},"old":{"collection_count":5,"collection_time_in_millis":270}}},"buffer_pools":{"direct":{"count":71,"used_in_bytes":24539898,"total_capacity_in_bytes":24539898},"mapped":{"count":84,"used_in_bytes":4318926707,"total_capacity_in_bytes":4318926707}},"classes":{"current_loaded_count":9552,"total_loaded_count":9695,"total_unloaded_count":143}}}}}
---- node process ---
{"cluster_name":"xxxx_search","nodes":{"Tze_hXZ2SwqQIvg4YWOcMg":{"name":"xxxx-slave-1","transport_address":"xxxx:9301","host":"xxxx","ip":"xxxx","version":"2.3.1","build":"bd98092","http_address":"xxxx:9201","process":{"refresh_interval_in_millis":1000,"id":25686,"mlockall":false}},"DrG535FuQKygzKlSCAWwLw":{"name":"xxxx-master-1","transport_address":"xxxx:9300","host":"xxxx","ip":"xxxx","version":"2.3.1","build":"bd98092","http_address":"xxxx:9200","attributes":{"master":"true"},"process":{"refresh_interval_in_millis":1000,"id":25587,"mlockall":false}}}}
Thanks!
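One thing stands out in the node process output above: both nodes report "mlockall": false even though bootstrap.mlockall: true is set, so the heap is not actually locked and the OS is free to swap it out. That usually means the elasticsearch user lacks the memlock ulimit. A sketch of the settings that typically fix this on a package-based Linux install (exact file paths vary by distribution):

```conf
# /etc/default/elasticsearch -- allow the init script to lock the heap in RAM
MAX_LOCKED_MEMORY=unlimited

# /etc/security/limits.conf -- needed when starting elasticsearch manually
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
```

After a restart, GET _nodes/process should report "mlockall": true on both nodes if the lock succeeded.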

Elasticsearch 503 error MasterNotDiscoveredException

I am new to ES. I want to add some new nodes to the same cluster via the elasticsearch.yml file, but I get this error when I try to check the cluster health
(GET _cluster/health):
"error": "MasterNotDiscoveredException[waited for [30s]]",
"status": 503
This is my config file:
cluster.name: mycluster6
node.name: "nodeA"
node.master: true
node.data: true
discovery.zen.ping.multicast.enabled: false

cluster.name: mycluster6
node.name: "nodeB"
node.master: false
node.data: true
discovery.zen.ping.multicast.enabled: false

cluster.name: mycluster6
node.name: "nodeC"
node.master: false
node.data: true
discovery.zen.ping.multicast.enabled: false

cluster.name: mycluster6
node.name: "nodeD"
node.master: false
node.data: true
discovery.zen.ping.multicast.enabled: false
discovery.zen.minimum_master_nodes: 3
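Assuming each block above belongs to a separate node's elasticsearch.yml, two things are worth checking: multicast is disabled but no discovery.zen.ping.unicast.hosts list is given, so the nodes have no way to find each other (which matches the empty ping responses in the trace below); and discovery.zen.minimum_master_nodes: 3 can never be satisfied when nodeA is the only master-eligible node. A minimal sketch of what each node's config would need (the address is a placeholder):

```yaml
# elasticsearch.yml on every node -- tell zen discovery where to ping
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.1.10"]  # nodeA's address (placeholder)
# with a single master-eligible node, the quorum is 1
discovery.zen.minimum_master_nodes: 1
```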
in console:
java.lang.NullPointerException
at org.elasticsearch.marvel.agent.Utils.extractHostsFromHttpServer(Utils.java:90)
at org.elasticsearch.marvel.agent.exporter.ESExporter.openAndValidateConnection(ESExporter.java:344)
at org.elasticsearch.marvel.agent.exporter.ESExporter.openExportingConnection(ESExporter.java:212)
at org.elasticsearch.marvel.agent.exporter.ESExporter.exportXContent(ESExporter.java:275)
at org.elasticsearch.marvel.agent.exporter.ESExporter.exportNodeStats(ESExporter.java:173)
at org.elasticsearch.marvel.agent.AgentService$ExportingWorker.exportNodeStats(AgentService.java:305)
at org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:225)
at java.lang.Thread.run(Thread.java:745)
[2015-05-05 14:00:58,053][TRACE][discovery.zen ] [nodeD] full ping responses: {none}
[2015-05-05 14:00:58,053][DEBUG][discovery.zen ] [nodeD] filtered ping responses: (filter_client[true], filter_data[false]) {none}
[2015-05-05 14:00:58,053][TRACE][discovery.zen ] [nodeD] not enough master nodes [[]]
[2015-05-05 14:00:58,053][TRACE][discovery.zen ] [nodeD] starting to ping
[2015-05-05 14:01:01,065][TRACE][discovery.zen ] [nodeD] full ping responses: {none}
[2015-05-05 14:01:01,065][DEBUG][discovery.zen ] [nodeD] filtered ping responses: (filter_client[true], filter_data[false]) {none}
[2015-05-05 14:01:01,065][TRACE][discovery.zen ] [nodeD] not enough master nodes [[]]
[2015-05-05 14:01:01,065][TRACE][discovery.zen ] [nodeD] starting to ping
[2015-05-05 14:01:04,079][TRACE][discovery.zen ] [nodeD] full ping responses: {none}
[2015-05-05 14:01:04,079][DEBUG][discovery.zen ] [nodeD] filtered ping responses: (filter_client[true], filter_data[false]) {none}
[2015-05-05 14:01:04,079][TRACE][discovery.zen ] [nodeD] not enough master nodes [[]]
[2015-05-05 14:01:04,079][TRACE][discovery.zen ] [nodeD] starting to ping
[2015-05-05 14:01:05,795][ERROR][marvel.agent ] [nodeD] exporter [es_exporter] has thrown an exception:

Resources