This is my elasticsearch.yml:
cluster.name: cluster
node.name: esn1
path.conf: "/etc/elasticsearch"
path.data: "/var/lib/elasticsearch"
path.logs: "/var/log/elasticsearch"
network.host: 0.0.0.0
http.port: 9201
bootstrap.memory_lock: false
discovery.zen.minimum_master_nodes: 1
xpack.monitoring.enabled: false
xpack.graph.enabled: false
xpack.watcher.enabled: false
I've also installed x-pack:
# sudo /usr/share/elasticsearch/bin/elasticsearch-plugin list
repository-s3
x-pack
Nevertheless:
curl -XPUT 'http://localhost:9200/_xpack/security/user/elastic/_password' -d '
> {
> "password": "L5ngDgtl00?"
> }
> '
No handler found for uri [/_xpack/security/user/elastic/_password] and method [PUT]
Any ideas?
You're almost there, but I guess you're making a mistake in the curl command: the -u elastic option is missing.
See here: https://www.elastic.co/guide/en/x-pack/current/security-getting-started.html
Also, try reinstalling x-pack by following step 1 in the link above.
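For illustration, a sketch of what the authenticated request could look like (the password value is a placeholder, and the Content-Type header is added as a common requirement; note also that the elasticsearch.yml above sets http.port: 9201, so the URL may need that port rather than 9200):
curl -u elastic -XPUT 'http://localhost:9200/_xpack/security/user/elastic/_password' \
  -H 'Content-Type: application/json' -d '
{
  "password": "<new_password>"
}
'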
Steps followed to install the SNMP manager and agent on EC2:
sudo apt-get update
sudo apt-get install snmp snmp-mibs-downloader
sudo apt-get update
sudo apt-get install snmpd
I opened /etc/snmp/snmp.conf with sudo nano and commented out the following line:
#mibs :
Then I went into the daemon configuration file and modified it as below:
sudo nano /etc/snmp/snmpd.conf
# Listen for connections from the local system only
#agentAddress udp:127.0.0.1:161            <--- commented out this part
# Listen for connections on all interfaces (both IPv4 and IPv6)
agentAddress udp:161,udp6:[::1]:161        <--- uncommented this line to make it work
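One step worth noting here, as an assumption on my part rather than something from the original write-up: after editing snmpd.conf, the daemon has to be restarted for the new agentAddress to take effect, for example:
sudo systemctl restart snmpd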
Using the command below I can get SNMP data:
snmpwalk -v 2c -c public 127.0.0.1 .
From inside the Docker container I can also get the data:
snmpwalk -v 2c -c public host.docker.internal .
Docker-compose:
telegraf_snmp:
image: telegraf:1.22.1
container_name: telegraf_snmp
restart: always
depends_on:
- influxdb
networks:
- analytics
extra_hosts:
- "host.docker.internal:host-gateway"
# ports:
# - "161:161/udp"
volumes:
- /mnt/telegraf/snmp:/var/lib/telegraf
- ./etc/telegraf/snmp/:/etc/telegraf/snmp/
env_file:
- secrets.env
environment:
INFLUXDB_URL: http://influxdb:8086
command:
--config-directory /etc/telegraf/snmp/telegraf.d
--config /etc/telegraf/snmp/telegraf.conf
links:
- influxdb
logging:
options:
max-size: "10m"
max-file: "3"
Telegraf Input conf:
[[inputs.snmp]]
## Agent addresses to retrieve values from.
## format: agents = ["<scheme://><hostname>:<port>"]
## scheme: optional, either udp, udp4, udp6, tcp, tcp4, tcp6.
## default is udp
## port: optional
## example: agents = ["udp://127.0.0.1:161"]
## agents = ["tcp://127.0.0.1:161"]
## agents = ["udp4://v4only-snmp-agent"]
# agents = ["udp://127.0.0.1:161"]
agents = ["udp://host.docker.internal:161"]
## Timeout for each request.
timeout = "15s"
## SNMP version; can be 1, 2, or 3.
version = 2
## SNMP community string.
community = "public"
## Agent host tag
# agent_host_tag = "agent_host"
## Number of retries to attempt.
retries = 3
## The GETBULK max-repetitions parameter.
# max_repetitions = 10
## SNMPv3 authentication and encryption options.
##
## Security Name.
# sec_name = "myuser"
## Authentication protocol; one of "MD5", "SHA", or "".
# auth_protocol = "MD5"
## Authentication password.
# auth_password = "pass"
## Security Level; one of "noAuthNoPriv", "authNoPriv", or "authPriv".
# sec_level = "authNoPriv"
## Context Name.
# context_name = ""
## Privacy protocol used for encrypted messages; one of "DES", "AES", "AES192", "AES192C", "AES256", "AES256C", or "".
### Protocols "AES192", "AES192", "AES256", and "AES256C" require the underlying net-snmp tools
### to be compiled with --enable-blumenthal-aes (http://www.net-snmp.org/docs/INSTALL.html)
# priv_protocol = ""
## Privacy password used for encrypted messages.
# priv_password = ""
## Add fields and tables defining the variables you wish to collect. This
## example collects the system uptime and interface variables. Reference the
## full plugin documentation for configuration details.
[[inputs.snmp.field]]
oid = "RFC1213-MIB::sysUpTime.0"
name = "uptime"
[[inputs.snmp.field]]
oid = "RFC1213-MIB::sysName.0"
name = "source"
is_tag = true
[[inputs.snmp.table]]
oid = "IF-MIB::ifTable"
name = "interface"
inherit_tags = ["source"]
[[inputs.snmp.table.field]]
oid = "IF-MIB::ifDescr"
name = "ifDescr"
is_tag = true
Telegraf logs:
Cannot find module (IF-MIB): At line 1 in (none)
IF-MIB::ifTable: Unknown Object Identifier: exit status 2
2022-09-09T10:10:09Z I! Starting Telegraf 1.22.1
2022-09-09T10:10:09Z I! Loaded inputs: snmp
2022-09-09T10:10:09Z I! Loaded aggregators:
2022-09-09T10:10:09Z I! Loaded processors:
2022-09-09T10:10:09Z I! Loaded outputs: file influxdb_v2
2022-09-09T10:10:09Z I! Tags enabled: host=7a38697f4527
2022-09-09T10:10:09Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"7a38697f4527", Flush Interval:10s
2022-09-09T10:10:09Z E! [telegraf] Error running agent: could not initialize input inputs.snmp: initializing table interface: translating: MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf
Cannot find module (IF-MIB): At line 1 in (none)
IF-MIB::ifTable: Unknown Object Identifier: exit status 2
2022-09-09T10:10:11Z I! Starting Telegraf 1.22.1
2022-09-09T10:10:11Z I! Loaded inputs: snmp
2022-09-09T10:10:11Z I! Loaded aggregators:
2022-09-09T10:10:11Z I! Loaded processors:
2022-09-09T10:10:11Z I! Loaded outputs: file influxdb_v2
2022-09-09T10:10:11Z I! Tags enabled: host=7a38697f4527
2022-09-09T10:10:11Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"7a38697f4527", Flush Interval:10s
2022-09-09T10:10:11Z E! [telegraf] Error running agent: could not initialize input inputs.snmp: initializing table interface: translating: MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf
Cannot find module (IF-MIB): At line 1 in (none)
IF-MIB::ifTable: Unknown Object Identifier: exit status 2
But in Telegraf I get the error above.
I checked the MIBs directory using ls /usr/share/snmp/mibs and cannot find the IF-MIB file there, even after installing:
$ sudo apt-get install snmp-mibs-downloader
$ sudo download-mibs
How can I resolve this issue? Do I need to follow some additional steps?
The SNMP plugin in Telegraf should be able to pull the data from the SNMP agent.
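Not an answer from the original thread, but a hedged sketch of one possible direction: the error is raised inside the Telegraf container, whose MIB search path (/root/.snmp/mibs:/usr/share/snmp/mibs:...) contains no IF-MIB. On Debian/Ubuntu hosts, download-mibs often places the files under /var/lib/mibs rather than /usr/share/snmp/mibs, which would also explain the empty-looking directory on the host. After locating IF-MIB on the host (for example with find / -name 'IF-MIB*' 2>/dev/null), that directory could be mounted into the container's search path via docker-compose:
telegraf_snmp:
  volumes:
    - /mnt/telegraf/snmp:/var/lib/telegraf
    - ./etc/telegraf/snmp/:/etc/telegraf/snmp/
    # hypothetical extra mount; adjust the host path to wherever IF-MIB actually lives
    - /var/lib/mibs/ietf:/usr/share/snmp/mibs/ietf:ro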
I was trying to set up an Elasticsearch cluster in AKS using the Helm chart, but due to the Log4j vulnerability I wanted to set it up with the option -Dlog4j2.formatMsgNoLookups set to true. I am getting an unknown flag error when I pass the arguments in the helm command.
Ref: https://artifacthub.io/packages/helm/elastic/elasticsearch/6.8.16
helm upgrade elasticsearch elasticsearch --set imageTag=6.8.16 esJavaOpts "-Dlog4j2.formatMsgNoLookups=true"
Error: unknown shorthand flag: 'D' in -Dlog4j2.formatMsgNoLookups=true
I have also tried to add the below in the values.yaml file:
esConfig: {}
# elasticsearch.yml: |
# key:
# nestedkey: value
log4j2.properties: |
-Dlog4j2.formatMsgNoLookups = true
but the values are not added to /usr/share/elasticsearch/config/jvm.options, /usr/share/elasticsearch/config/log4j2.properties, or the environment variables.
First of all, here's a good source of knowledge about mitigating the Log4j2 security issue, if that is the reason you're here.
Here's how you can write your values.yaml for the Elasticsearch chart:
esConfig:
log4j2.properties: |
logger.discovery.name = org.elasticsearch.discovery
logger.discovery.level = debug
A ConfigMap will be generated by Helm:
apiVersion: v1
kind: ConfigMap
metadata:
name: elasticsearch-master-config
...
data:
log4j2.properties: |
logger.discovery.name = org.elasticsearch.discovery
logger.discovery.level = debug
And the Log4j configuration will be mounted into your Elasticsearch container as:
...
volumeMounts:
...
- name: esconfig
mountPath: /usr/share/elasticsearch/config/log4j2.properties
subPath: log4j2.properties
Update: how to set and add multiple configuration files.
You can set up other ES configuration files in your values.yaml. All the files you specify here will be part of the ConfigMap, and each file will be mounted at /usr/share/elasticsearch/config/ in the Elasticsearch container. Example:
esConfig:
elasticsearch.yml: |
node.master: true
node.data: true
log4j2.properties: |
logger.discovery.name = org.elasticsearch.discovery
logger.discovery.level = debug
jvm.options: |
# You can also place a comment here.
-Xmx1g -Xms1g -Dlog4j2.formatMsgNoLookups=true
roles.yml: |
click_admins:
run_as: [ 'clicks_watcher_1' ]
cluster: [ 'monitor' ]
indices:
- names: [ 'events-*' ]
privileges: [ 'read' ]
field_security:
grant: ['category', '#timestamp', 'message' ]
query: '{"match": {"category": "click"}}'
All of the configurations above are for illustration only, to demonstrate how to add multiple configuration files in values.yaml. Please substitute them with your own settings.
If you update and put a value under esConfig, you will need to remove the curly brackets:
esConfig:
log4j2.properties: |
key = value
I would rather suggest changing the /config/jvm.options file and adding this at the end:
-Dlog4j2.formatMsgNoLookups=true
The Helm chart has an option to set Java options:
esJavaOpts: "" # example: "-Xmx1g -Xms1g"
In your case, setting it like this should be the solution:
esJavaOpts: "-Dlog4j2.formatMsgNoLookups=true"
As I can see in the updated values.yml in the elastic repository:
esConfig: {}
log4j2.properties: |
key = value
You probably need to uncomment the log4j2.properties part.
Trying to set Elasticsearch to bind to an address other than local, I'm having a lot of trouble.
Elasticsearch-oss 7.7 with Opendistro.
elasticsearch.yml:
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
I cannot get any syntax to work... I tried an array:
network.host: [ "127.0.0.1", "100.0.0.1" ]
...and tried different variations, like special values, etc.
network.host: 0.0.0.0
also not working...
network:
host: _global_
also not working...
(using global address for testing)
network:
host: _local_
working
network:
host: _local_ , _interface-name_
...not working.
Finally I found a way to bind to another address, and I can get a request externally... but now localhost is failing!
network.host: localhost
http.host: 100.0.0.1
From the same server:
curl -XGET https://localhost:9200 -u admin:admin --insecure
curl: (7) Failed to connect to localhost port 9200: Connection refused
From the client:
curl -XGET https://100.0.0.1:9200 -u admin:admin --insecure
{
"name" : "somename",
"cluster_name" : "someclustername",
"cluster_uuid" : "someclusteruuid",
"version" : {
"number" : "7.7.0",
"build_flavor" : "oss",
"build_type" : "deb",
"build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
"build_date" : "2020-05-12T02:01:37.602180Z",
"build_snapshot" : false,
"lucene_version" : "8.5.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
Awaiting your approach to this problem.
Thanks
[edit]
Now I found a certificate error in the log... I don't know if it is related.
I'm using the default security settings for the Opendistro plugin.
at java.lang.Thread.run(Thread.java:832) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
at sun.security.ssl.TransportContext.fatal(TransportContext.java:311) ~[?:?]
at sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:291) ~[?:?]
at sun.security.ssl.TransportContext.dispatch(TransportContext.java:184) ~[?:?]
at sun.security.ssl.SSLTransport.decode(SSLTransport.java:167) ~[?:?]
at sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:729) ~[?:?]
at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:684) ~[?:?]
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:499) ~[?:?]
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:475) ~[?:?]
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:634) ~[?:?]
Here is the full elasticsearch.yml.
The security cert options are the Opendistro defaults.
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: somename
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
network.host: localhost
http.host: 100.0.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
######## Start OpenDistro for Elasticsearch Security Demo Configuration ########
# WARNING: revise all the lines below before you go into production
opendistro_security.ssl.transport.pemcert_filepath: esnode.pem
opendistro_security.ssl.transport.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: esnode.pem
opendistro_security.ssl.http.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true
opendistro_security.authcz.admin_dn:
- CN=kirk,OU=client,O=client,L=test, C=de
opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
cluster.routing.allocation.disk.threshold_enabled: false
node.max_local_storage_nodes: 3
######## End OpenDistro for Elasticsearch Security Demo Configuration ########
What does "client" mean in this context?
A client node that is shipping logs to the server node, in this case for testing purposes.
I will configure the certs properly and set discovery.type to see if that fixes it.
Thanks
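One more note on the binding question itself, offered only as a hedged sketch and not something from the thread: setting http.host: 100.0.0.1 replaces the HTTP bind address entirely, so the loopback interface is no longer listened on, which matches the "connection refused" for localhost above. The HTTP host setting also accepts a list, so binding HTTP to both addresses while keeping the transport layer local might look like:
network.host: localhost
http.host: ["localhost", "100.0.0.1"]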
As a follow-up to this question, I want to know how I can reach my external service (Elasticsearch) from inside a Kubernetes pod (fluentd) if the external service is not reachable via the internet but only from the host network where my Kubernetes cluster is also hosted.
Here is the external service kubernetes object I applied:
kind: Service
apiVersion: v1
metadata:
name: ext-elastic
namespace: kube-system
spec:
type: ExternalName
externalName: 192.168.57.105
ports:
- port: 9200
So now I have this service:
ubuntu#controller:~$ kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ext-elastic ExternalName <none> 192.168.57.105 9200/TCP 2s
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 1d
Elasticsearch is there:
ubuntu#controller:~$ curl 192.168.57.105:9200
{
"name" : "6_6nPVn",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "ZmxYHz5KRV26QV85jUhkiA",
"version" : {
"number" : "6.2.3",
"build_hash" : "c59ff00",
"build_date" : "2018-03-13T10:06:29.741383Z",
"build_snapshot" : false,
"lucene_version" : "7.2.1",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
But from my fluentd pod I can neither resolve the service name with nslookup nor ping the plain IP. Neither of these commands works:
ubuntu#controller:~$ kubectl exec fluentd-f5dks -n kube-system ping 192.168.57.105
ubuntu#controller:~$ kubectl exec fluentd-f5dks -n kube-system nslookup ext-elastic
Here is a description of my network topology:
The VM hosting my Elasticsearch has 192.168.57.105, and the VM hosting my Kubernetes controller has 192.168.57.102. As shown above, the connection between them works well.
The controller node also has the IP 192.168.56.102. This is the network it shares with the other worker nodes (also VMs) of my Kubernetes cluster.
My fluentd pod sees itself as 172.17.0.2. It can easily reach 192.168.56.102 but not 192.168.57.102, although that is its host and one and the same node.
Edit
The routing table of the fluentd-pod looks like this:
ubuntu#controller:~$ kubectl exec -ti fluentd-5lqns -n kube-system -- route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.244.0.1 0.0.0.0 UG 0 0 0 eth0
10.244.0.0 * 255.255.255.0 U 0 0 0 eth0
The /etc/resolv.conf of the fluentd pod looks like this:
ubuntu#controller:~$ kubectl exec -ti fluentd-5lqns -n kube-system -- cat /etc/resolv.conf
nameserver 10.96.0.10
search kube-system.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
The routing table of the VM that hosts the Kubernetes controller and can reach the desired Elasticsearch service looks like this:
ubuntu#controller:~$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.0.2.2 0.0.0.0 UG 0 0 0 enp0s3
10.0.2.0 * 255.255.255.0 U 0 0 0 enp0s3
10.244.0.0 * 255.255.255.0 U 0 0 0 kube-bridge
10.244.1.0 192.168.56.103 255.255.255.0 UG 0 0 0 enp0s8
10.244.2.0 192.168.56.104 255.255.255.0 UG 0 0 0 enp0s8
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
192.168.56.0 * 255.255.255.0 U 0 0 0 enp0s8
192.168.57.0 * 255.255.255.0 U 0 0 0 enp0s9
Basically, your pod needs either a route to the endpoint IP or a default route to a router that can redirect this traffic to the destination.
The destination endpoint also needs a route (or a default route) back to the source of the traffic in order to send a reply.
Check out this article for details about routing in the AWS cloud as an example.
In a general sense, a route table tells network packets which way they
need to go to get to their destination. Route tables are managed by
routers, which act as “intersections” within the network — they
connect multiple routes together and contain helpful information for
getting traffic to its final destination. Each AWS VPC has a VPC
router. The primary function of this VPC router is to take all of the
route tables defined within that VPC, and then direct the traffic flow
within that VPC, as well as to subnets outside of the VPC, based on
the rules defined within those tables.
Route tables consist of a list of destination subnets, as well as
where the “next hop” is to get to the final destination.
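To make that concrete for the topology above (a sketch under assumptions, since only the controller's routing table is shown): if the worker nodes hosting the fluentd pod have no route to 192.168.57.0/24, one option is to route that network via the controller, which has an interface in it, and let the controller forward the traffic. Hypothetical commands, to be adapted to the actual interfaces:
# on each worker node: reach the elasticsearch network via the controller's 192.168.56.x address
sudo ip route add 192.168.57.0/24 via 192.168.56.102 dev enp0s8
# on the controller: allow forwarding between enp0s8 and enp0s9
sudo sysctl -w net.ipv4.ip_forward=1
The reply path matters just as much: the Elasticsearch VM at 192.168.57.105 would also need a route back to the pod network (10.244.0.0/16, judging by the routing tables above), for example via the controller's 192.168.57.102 address.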
We made a PoC with Elasticsearch, but while doing it we lost data in a clustered environment. We are using ES 2.4.0.
Can anyone tell us what we are missing?
Our scenario is:
Open Elasticsearch Server-1 and Server-2 with the configurations below; they are in a cluster.
Index document over Server-1:
curl -XPUT '20.20.20.5:9200/ert/post/1' -d '
{
"user": "easlan",
"postDate": "01-16-2015",
"body": "Adding Data in ElasticSearch Cluster" ,
"title": "ElasticSearch Cluster Test - 1"
}'
Look for indexed docs over Server-1 or Server-2: the total number of results is 1 (as expected):
curl -XGET '20.20.20.5:9200/ert/post/_search?q=user:easlan&pretty=true'
curl -XGET '20.20.20.6:9200/ert/post/_search?q=user:easlan&pretty=true'
Then close Server-1
Index new document over Server-2:
curl -XPUT '20.20.20.6:9200/ert/post/2' -d '
{
"user": "easlan",
"postDate": "01-16-2015",
"body": "Adding Data in ElasticSearch Cluster" ,
"title": "ElasticSearch Cluster Test - 2"
}'
Look for indexed docs over Server-2: the total number of results is 2:
curl -XGET '20.20.20.6:9200/ert/post/_search?q=user:easlan&pretty=true'
Close Server-2
Open Server-1
Look for indexed docs over Server-1: the total number of results is 1 (as expected, because Server-2 is closed):
curl -XGET '20.20.20.5:9200/ert/post/_search?q=user:easlan&pretty=true'
Then open Server-2 again and look for indexed docs over Server-1 or Server-2. We expect the total number of results to be 2, but when we look, we get 1. Even if we restart both of them again, the result is still 1:
curl -XGET '20.20.20.5:9200/ert/post/_search?q=user:easlan&pretty=true'
curl -XGET '20.20.20.6:9200/ert/post/_search?q=user:easlan&pretty=true'
Our Configurations:
*** Server-1 ****
cluster.name: ESCluster
node.master: true
node.name: "es1"
node.data: true
network.bind_host: ["127.0.0.1","20.20.20.5"]
network.publish_host: "20.20.20.5"
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["20.20.20.5","20.20.20.6"]
discovery.zen.minimum_master_nodes: 1
*** Server-2 ****
cluster.name: ESCluster
node.master: true
node.name: "es2"
node.data: true
network.bind_host: ["127.0.0.1","20.20.20.6"]
network.publish_host: "20.20.20.6"
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["20.20.20.5","20.20.20.6"]
discovery.zen.minimum_master_nodes: 1
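As a closing note that is not part of the original post: with two master-eligible nodes, the Elasticsearch 2.x guidance is to set discovery.zen.minimum_master_nodes to (master-eligible nodes / 2) + 1, which is 2 here. With the value 1 shown above, each server can elect itself as master while the other is down, and when both come back the diverging copies of the index are not merged, which would match the observed loss of one document. A sketch of the setting for both servers:
discovery.zen.minimum_master_nodes: 2
The trade-off is that with only two nodes the cluster cannot elect a master whenever either node is down.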