Sending log files/data from one EC2 instance to another - elasticsearch

So I have one EC2 instance with Logstash, Elasticsearch and Kibana installed on it, and I have another EC2 instance that's running a dummy Apache server. Now I know that I should install Filebeat on the Apache server instance to send the log files to the Logstash instance, but I'm not sure how to configure the files.
My main goal is basically to send the log files from one instance to another for processing and viewing, i.e. in ES and Kibana. Any help or advice is greatly appreciated.
Thanks in advance!
Cheers!

So as you have already stated, the easiest way to send log events from one machine to an Elastic instance is to install the Filebeat agent on the machine Apache is running on.
Filebeat has its own Apache module that makes the configuration even easier! In the module configuration you specify the paths of the desired log files.
Then you also need to configure Filebeat itself. In filebeat.yml you define the Logstash destination under
output.logstash
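For illustration, a minimal filebeat.yml along these lines might look as follows. The log paths and the Logstash address are assumptions to replace with your own, and Logstash must have a beats input listening on that port (with the EC2 security group allowing it):

# filebeat.yml - minimal sketch
filebeat.modules:
  - module: apache              # named "apache2" on the Filebeat 6.x line
    access:
      enabled: true
      var.paths: ["/var/log/httpd/access_log*"]   # adjust to your distribution
    error:
      enabled: true
      var.paths: ["/var/log/httpd/error_log*"]

output.logstash:
  hosts: ["10.0.0.5:5044"]      # private IP of the Logstash EC2 instance (example)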
This configuration guide goes into more detail.
Take a look at the filebeat.yml reference for all configuration settings.
If you are familiar with Docker, there is also a guide on how to run Filebeat on Docker.
Have fun! :-)

Related

How can we get the nginx access log in Laravel

As the title says, I need to get data from the nginx access log to process and store in the DB. Does anyone have any ideas about this? Thank you for reading this post.
You should not be storing nginx logs in the DB and trying to read them through Laravel; it will very quickly cause you performance and storage issues, especially in production. Another issue: if you have several servers, how would you aggregate all the logs?
Common practice is to use a NoSQL/search store (rather than your relational DB) for such tasks. You can set up another dedicated server to which you export all your logs and analyze them there. You install an exporter on every one of your servers, point it at your log file, and it ships the logs to a central log server. You can set this up yourself using something like the ELK stack, where Filebeat and Logstash take care of shipping and parsing (a minimal pipeline is sketched below).
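As a rough sketch of that setup, assuming Filebeat ships /var/log/nginx/access.log to port 5044 and nginx uses its default combined log format, the Logstash pipeline could look like:

input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    # nginx's default access log format matches the combined Apache pattern
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]          # assumed Elasticsearch endpoint
    index => "nginx-access-%{+YYYY.MM.dd}"
  }
}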
Better still would be to use one of the managed services out there such as GCP Logging, Splunk, etc. You have to pay for them, but they offer a lot of benefits. Splunk would provide you with an exporter; with GCP you could use fluentd. If you are using containers, you can also set up a fluentd container and shared volumes to export the logs.

How to monitor an ElasticSearch Cluster on the Elastic Cloud with Datadog?

We have an elasticsearch cluster deployed to the Elastic Cloud and would like to send monitoring/health metrics to Datadog. What is the best way to do that?
It seems like our options are:
Installing the datadog agent binary via the plugins upload
Using Metricbeat -> Logstash -> datadog_metrics output
You can deploy the Datadog agent in a container / instance that you manage and then configure it according to these instructions to gather metrics from the remote Elasticsearch cluster that is hosted on Elastic Cloud. You need to create a conf.yaml file in the elastic.d/ directory and provide the required information (Elasticsearch endpoint/URL, username, password, port, etc.) for the agent to be able to connect to the cluster. You may find a sample configuration file here.
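A minimal conf.yaml along those lines could look like this; the endpoint and credentials are placeholders for your Elastic Cloud deployment, and the option names follow the agent's Elasticsearch check:

init_config:

instances:
  - url: https://<your-deployment>.es.us-east-1.aws.found.io:9243   # Elastic Cloud endpoint (placeholder)
    username: datadog_monitor                                       # assumed monitoring user
    password: "<PASSWORD>"
    cluster_stats: true     # collect cluster-wide stats, not just per-node
    pshard_stats: true      # collect primary shard stats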
As George Tseres mentioned above, the way I had to get this working was to set up collection on a separate instance (through docker) and then to configure it to read the specific Elastic Cloud instances.
I ended up making this: https://github.com/crwang/datadog-elasticsearch, building that docker image, and then pushing it up to AWS ECR.
Then, I spun up a Fargate service / task to run the container.
I also set it to run locally with docker-compose as a test.

Showing crashed/terminated pod logs on Kibana

I am currently working on the ELK setup for my Kubernetes clusters. I set up logging for all the pods and fortunately, it's working fine.
Now I want to also push the logs of terminated/crashed pods (which we get by describing the pod, but not as docker logs) to my Kibana instance.
I checked on my server for those logs, but they don't seem to be stored anywhere on my machine (inside /var/log/).
Maybe it's not enabled, or I might not be aware of where to find them.
If these logs are available in a log file similar to the system log then I think it would be very easy to put them on Kibana.
It would be a great help if anyone can help me achieve this.
You need to use kube-state-metrics, which exposes all pod-related metrics (including termination state and reasons). You can then configure a shipper to pull from kube-state-metrics and send the data to Elasticsearch; it will create an index for these metrics, and you can easily use that index to display your charts/graphs in the Kibana UI. A sketch of one way to do this follows the link below.
https://github.com/kubernetes/kube-state-metrics
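One way to get those metrics into Elasticsearch is Metricbeat's kubernetes module, sketched here; the kube-state-metrics and Elasticsearch hostnames/ports are assumptions for your cluster:

metricbeat.modules:
  - module: kubernetes
    metricsets: ["state_pod", "state_container"]   # pod/container state from kube-state-metrics
    period: 10s
    hosts: ["kube-state-metrics:8080"]             # assumed service name and port

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]             # assumed Elasticsearch endpoint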

Elasticsearch Filebeat

I'm new to Elasticsearch and I'm trying to integrate ES into our infrastructure. I installed one central ES server (6.0) with Elasticsearch, Kibana, etc.
The first task I want to do is send Apache log files from other servers to this ES server.
From the description of Filebeat, it seems this module does exactly what I want (lightweight shipping of log files to the ES server):
https://www.elastic.co/downloads/beats/filebeat
I installed Filebeat from the RPM on our server, but it does not seem to run because of missing plugins (GeoIP, UA). I tried to install these, but there is no "elasticsearch-plugin" executable available.
Do I have to install the whole ES package on every server from which I want to send log files to our ES server?
Or is there another way to send log files to the ES server and process fields like IP and UA on the server side?
It's not the only approach, but this is generally the best way to get started.
You're nearly there: The elasticsearch-plugin is located in /usr/share/elasticsearch/bin/. You will need to install the GeoIP and UA plugins on every Elasticsearch node. Once that's done you should be able to use the Apache module in Filebeat.
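For example, on each Elasticsearch 6.0 node (paths as installed from the RPM; adjust if your layout differs):

sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-user-agent
sudo systemctl restart elasticsearch

After that, enabling the module on the Filebeat host (filebeat modules enable apache2 on the 6.x line) and restarting Filebeat should get the Apache logs flowing.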

Running Netflix Conductor with standalone Elasticsearch?

How do I configure Netflix Conductor to use a standalone Elasticsearch rather than the embedded Elasticsearch?
If you have a conductor-config.properties, just make sure you have these properties pointing to the Elasticsearch instance you have up and running:
workflow.elasticsearch.instanceType=EXTERNAL
workflow.elasticsearch.url=http://elasticsearch:9200
Then you should be able to run Conductor with that config:
java -jar conductor-server-2.15.0-SNAPSHOT-all.jar conductor-config.properties
Check out https://github.com/s50600822/conductor-cheat as an example, swapping the Elasticsearch container for your own and modifying conductor-config.properties (the file is copied in when you bring the stack up). Inside the repo just do:
docker-compose up
Check out https://github.com/Netflix/conductor/blob/master/es5-persistence/src/main/java/com/netflix/conductor/dao/es5/index/ElasticSearchRestDAOV5.java for other options.
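If you go that docker-compose route, the Elasticsearch service you swap in could look roughly like this (the image tag, memory settings and exposed ports are just an assumed example for a 5.x cluster):

elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:5.6.8
  environment:
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"      # keep the heap small for a dev setup
    - xpack.security.enabled=false          # disable auth so Conductor can connect without credentials
  ports:
    - "9200:9200"                           # HTTP, matches workflow.elasticsearch.url
    - "9300:9300"                           # TCP transport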
To add an external Elasticsearch, follow the code changes described in https://github.com/Netflix/conductor/tree/master/es5-persistence, then rebuild the jar and run the Conductor server again with the updated properties.
If you still get errors, I suggest following https://github.com/Netflix/conductor/issues/489.
You can use a standalone installation of Elasticsearch 2 or Elasticsearch 5 because the associated support classes are already provided with the Netflix Conductor binary.
To configure it externally you have to do the following:
Install and configure a standalone Elasticsearch. By default the installation exposes two ports, 9200 (HTTP) and 9300 (TCP).
Update the server.properties file with the host and port so that communication happens with the standalone instance of Elasticsearch.
Hope this helps.
