Integration between ELK and LDAP - elasticsearch

I recently took over managing an open-source-based infrastructure composed of multiple Debian servers. On some of them, the ELK stack is installed.
I am verifying whether there is any integration between ELK and LDAP or other IAM systems. On the dedicated monitoring node, I looked for IAM-related settings in the following configuration files:
/etc/elasticsearch/elasticsearch.yml
/etc/kibana/kibana.yml
/etc/logstash/logstash.yml
but the only login/account credentials I have been able to find are in the kibana.yml file:
elasticsearch.username: "username"
elasticsearch.password: "password"
In /etc/kibana/kibana.yml and /etc/elasticsearch/elasticsearch.yml I find the following:
xpack.security.enabled: false
which leads me to think that an "xpack" plugin is somehow related to LDAP. Where should I look for LDAP integration?
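(For reference: "xpack" here refers to Elastic's X-Pack security plugin. When it is enabled, LDAP integration is declared as an authentication realm in elasticsearch.yml, roughly like the sketch below; the hostnames and DNs are placeholders, not values from any real configuration. Since xpack.security.enabled is false on these nodes, though, X-Pack cannot be the integration point here.)
xpack:
  security:
    authc:
      realms:
        ldap:                                      # 7.x-style realm syntax
          ldap1:
            order: 0
            url: "ldaps://ldap.example.com:636"    # placeholder
            bind_dn: "cn=search,dc=example,dc=com" # placeholder
            user_search:
              base_dn: "dc=example,dc=com"         # placeholder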

Thanks to @Wonka for suggesting the presence of ReadonlyREST. I found a readonlyrest.yml in /etc/elasticsearch. There, the following was present:
ldaps:
  - name: ldap1
    host: "ourldapserver.ourdomain"
    [...]
This is where the LDAP integration occurred.
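For context, a fuller ReadonlyREST LDAP setup in readonlyrest.yml usually pairs such an ldaps connector with an access-control rule that references it. A minimal sketch, where all ports, DNs, and group names are made-up placeholders rather than our actual values:
readonlyrest:
  access_control_rules:
    - name: "Require LDAP-authenticated users"        # hypothetical rule
      ldap_auth:
        name: "ldap1"                                 # must match the connector below
        groups: ["engineering"]                       # placeholder group
  ldaps:
    - name: ldap1
      host: "ourldapserver.ourdomain"
      port: 389                                       # placeholder
      search_user_base_DN: "ou=People,dc=ourdomain"   # placeholder
      search_groups_base_DN: "ou=Groups,dc=ourdomain" # placeholder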

Related

Unable to start Elasticsearch Enterprise/App Search

I've got a self-hosted Elasticsearch + Kibana environment that I'm trying to add Elasticsearch Enterprise/App Search to.
While trying to start up Elasticsearch Enterprise/App Search, I'm getting the error below:
Elasticsearch API key service must be enabled. It is enabled automatically when you configure Elasticsearch to use TLS on the HTTP interface.
Alternatively, you can explicitly enable the setting within Elasticsearch by opening config/elasticsearch.yml and adding:
xpack.security.authc.api_key.enabled: true
I have added that setting and am still getting the error upon startup.
Here are the properties I modified in elasticsearch.yml:
xpack.security.enabled: true
#xpack.security.audit.logfile.events.emit_request_body: true
discovery.type: single-node
xpack.security.authc.api_key.enabled: true
xpack:
  security:
    authc:
      realms:
        native:
          native1:
            order: 0
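Note that the error message itself offers an alternative: the API key service is enabled automatically when TLS is enabled on the HTTP interface. A minimal sketch of that route in elasticsearch.yml, assuming a keystore has already been created (for example with the elasticsearch-certutil http tool); the path below is a placeholder:
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: "http.p12"   # placeholder keystore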

Spring Cloud Config Server - GitLab SSH Connection

After going through a number of SO threads, blogs, and the Spring Cloud Config documentation, I still couldn't find how to connect to a remote GitLab repository, as I'm getting the error below while starting the spring-cloud-config server.
Caused by: com.jcraft.jsch.JSchException: Auth fail
spring:
  cloud:
    config:
      server:
        git:
          uri: git@private_gitlab_repo:project
          search-paths: '{application}'
          skip-ssl-validation: true
          strict-host-key-checking: false
          known-hosts-file: C:\Users\myname\.ssh\known_hosts
spring-boot: 2.1.2.RELEASE
spring-cloud.version: Greenwich.RELEASE
OS: Windows 7
From the command prompt, I am able to interact with the GitLab repository. I have generated the SSH key and added the public key in the GitLab settings. Also, I do not have the option to use a username and password to connect to GitLab.
Any pointers on where I'm missing the configuration or steps?
It turned out this was an issue with my IntelliJ IDEA IDE; when I ran the same project from the command prompt, it worked without any issues.
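As an aside, when JSch-based SSH auth fails only in some environments, Spring Cloud Config Server can be told to ignore the local ~/.ssh setup and use a key supplied via properties instead. A hedged sketch using the documented ignore-local-ssh-settings and private-key properties (the key body is obviously a placeholder):
spring:
  cloud:
    config:
      server:
        git:
          uri: git@private_gitlab_repo:project
          ignore-local-ssh-settings: true
          private-key: |
            -----BEGIN RSA PRIVATE KEY-----
            (placeholder key material)
            -----END RSA PRIVATE KEY-----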

Setting up ELK stack

I'm completely new to ELK and trying to install the stack with some beats for our servers.
Elasticsearch, Kibana and Logstash are all installed (on server A). I followed this guide: https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html.
Filebeat template was installed as well.
I also installed Filebeat on another server (server B) and tried to test the connection:
$ /usr/share/filebeat/bin/filebeat test output -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat
logstash: my-own-domain:5044...
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 163.172.167.147
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.2
    dial up... OK
  talk to server... OK
Things seem to be OK, yet Filebeat on server B doesn't appear to be sending data to Logstash.
Accessing Kibana keeps redirecting me back to the Create Index Pattern page, with the message
Couldn't find any Elasticsearch data
Any pointers in the right direction would be really appreciated.
Can you check your filebeat.yml file and see if the configuration for logs is activated:
filebeat.prospectors:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log
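It is also worth checking that the output section of filebeat.yml actually points at Logstash and not directly at Elasticsearch, since only one output may be enabled at a time. A minimal sketch matching the host from the test above; the CA certificate path is a placeholder:
output.logstash:
  hosts: ["my-own-domain:5044"]
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-ca.crt"]   # placeholder path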

Configuring Elasticsearch not to be localhost

After installing Elasticsearch 5.6.3 and setting node.name to the server name, I tried to browse to Elasticsearch using IP:9200, but it didn't work. If I browse to localhost:9200, it works. Where do I go to change the default behaviour of localhost? I want to open this up to other external servers, so the loopback address of localhost isn't any good.
After installing Kibana 5.6.3, the same is obviously true there as well. Starting the Kibana server with the IP fails, but with localhost it doesn't.
At this point I have no indexes; I just want to prove Elasticsearch can be reached beyond localhost.
Thanks
Bill
You can configure the bind address with the "network.host" setting in 'elasticsearch.yml' and the "server.host" setting in 'kibana.yml' in your config directory.
Here are some links to the Elasticsearch docs for configuring yours :)
Configuring Elasticsearch
Important Settings
For a quick-start development configuration, the following settings can be placed in 'elasticsearch.yml':
network.host e.g.
network.host: 192.168.178.49
cluster.initial_master_nodes e.g.
cluster.initial_master_nodes: ["node_1"]
You can also define a cluster name:
cluster.name: my-application
Start it with the node name (example for Windows)
C:\InstallFolder\elasticsearch-7.10.0>C:\InstallFolder\elasticsearch-7.10.0\bin\elasticsearch.bat -Enode.name=node_1
Go to your browser and open http://192.168.178.49:9200 (replace with your IP). It shows a JSON result. Note that localhost:9200 will no longer work.
This config should not be used for production environments. See the official docs.
In general, when you start from a command prompt, any errors are printed when something fails; these are very helpful.
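For Kibana the analogous settings live in kibana.yml; a minimal sketch for the 5.x line used in the question (newer versions renamed elasticsearch.url to elasticsearch.hosts), with the IP as a placeholder:
server.host: "192.168.178.49"
elasticsearch.url: "http://192.168.178.49:9200"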

Spring Dataflow and Yarn: How to set properties properly?

How can one change the default appdeployerappmaster properties?
I'm trying to deploy an application through Spring Cloud Data Flow for YARN. I registered my app, created a stream, and clicked the "deploy" button. When doing so, I get the following error:
[XNIO-2 task-2] WARN o.s.c.d.s.c.StreamDeploymentController - Exception when deploying the app StreamAppDefinition [streamName=histo, name=my-app, registeredAppName=my-app, properties={spring.cloud.stream.bindings.input.destination=log, spring.cloud.stream.bindings.input.group=histo}]: java.util.concurrent.ExecutionException: org.springframework.yarn.YarnSystemException: Invalid host name: local host is: (unknown); destination host is: "null":8032; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost; nested exception is java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "null":8032; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
As you can see, the deployer is unable to find the Resource Manager URI, although it is found correctly when the Spring Data Flow server starts.
So I only get the problem at deployment time.
Which property should I set to fix this issue, and where should I do that?
EDIT 1:
Following Janne Valkealahti's answer, I added the following properties in /dataflow/apps/stream/app/servers.yml, relaunched the server, and tried to re-deploy my stream.
spring:
  cloud:
    dataflow:
      yarn:
        version: 0.0.1-SNAPSHOT
    deployer:
      yarn:
        version: 1.0.2.RELEASE
    stream:
      kafka:
        binder:
          brokers: kafka.my-domain.com:9092
          zkNodes: zookeeper.my-domain.com:2181/node
  # Configured for Hadoop single-node running on localhost. Replace with property values reflecting your
  # actual Hadoop cluster when running in a distributed environment.
  hadoop:
    fsUri: hdfs://mapr.my-domain.com/referentiel/ca_category_2014/
    resourceManagerHost: mapr.my-domain.com
    resourceManagerPort: 8032
    resourceManagerSchedulerAddress: mapr.my-domain.com:8030
  session:
    store-type: none
I still get the exact same message.
PS: I'm not using Ambari; I'd like to understand how it works manually first.
EDIT 2:
I solved the problem by adding the -Dspring.config.location JVM arg on the Data Flow server. The given configuration is passed to the deployer, and the application is effectively deployed.
I'll write an answer for it.
You didn't say whether your installation was based on Ambari or a normal manual YARN install, so I assume it was the latter (manual).
I think the problem is that config/servers.yml in the distribution you use has a wrong setting for resourceManagerHost, as it defaults to localhost. This file is distributed into HDFS only once, when streams are first launched; if you change it afterwards and redeploy or recreate the stream, the app in the HDFS directory will not get updated. By default, this file in HDFS is /dataflow/apps/stream/app/servers.yml.
This error makes sense, as the Data Flow YARN server controlling the whole thing also needs access to the YARN resource manager to submit apps. The settings for the server come from the same servers.yml file.
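If a stale copy has already been distributed, one way to force a refresh (assuming a standard HDFS CLI is available) is to delete it so it gets re-copied on the next stream deployment:
hdfs dfs -rm /dataflow/apps/stream/app/servers.yml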
It turns out I needed to add the -Dspring.config.location JVM arg to make it work. -Dspring.config.location should point to the file containing the YARN configuration, i.e.:
spring:
  cloud:
    dataflow:
      yarn:
        version: 0.0.1-SNAPSHOT
    deployer:
      yarn:
        version: 1.0.2.RELEASE
    stream:
      kafka:
        binder:
          brokers: kafka.my-domain.com:9092
          zkNodes: zookeeper.my-domain.com:2181/node
  # Configured for Hadoop single-node running on localhost. Replace with property values reflecting your
  # actual Hadoop cluster when running in a distributed environment.
  hadoop:
    fsUri: hdfs://mapr.my-domain.com/referentiel/ca_category_2014/
    resourceManagerHost: mapr.my-domain.com
    resourceManagerPort: 8032
    resourceManagerSchedulerAddress: mapr.my-domain.com:8030
  session:
    store-type: none
This configuration is then passed to the deployer app (appdeployerappmaster if I get it right).
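In practice that means launching the server with the extra JVM argument, along these lines (the jar name and config path here are illustrative, not exact):
java -Dspring.config.location=/path/to/servers.yml -jar spring-cloud-dataflow-server-yarn-<version>.jar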
