New Relic log when running the Elasticsearch daemon

I installed the New Relic Java agent with Elasticsearch (not the New Relic Elasticsearch plugin, mind you).
When running Elasticsearch with either:
sudo /usr/share/elasticsearch/elasticsearch
or
sudo service elasticsearch start
it works fine, and the data flows to my dashboard.
However, when running as a service, the logfile in /usr/share/elasticsearch/newrelic/log is not written to, so I cannot debug what is happening to New Relic.
Any idea why?

But why do you need to see the New Relic agent's log in the first place? For what it's worth, you may want to be aware of http://sematext.com/spm/index.html
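That said, if you do need the agent log: one possibility (an assumption here, since the service script isn't shown) is that the agent resolves its log directory relative to how the process was launched, or lacks write permission there when started as a service. You can pin the log to an absolute, writable path in newrelic.yml so it lands in the same place regardless of how the daemon starts; a sketch (the path is an example):

common: &default_settings
  license_key: '<your license key>'
  # pin the agent log to an absolute path so it does not depend on the
  # working directory or user the service script happens to use
  log_file_path: /var/log/newrelic
  log_file_name: newrelic_agent.log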

Related

Can we use Kibana as a log monitoring tool for an application running in WebLogic?

My use case is: I have a Java application running in WebLogic. I want to monitor this application's logs in real time. The logs are created using log4j. Is it possible to use Kibana, or to configure Kibana, to monitor these logs in real time?
Yes you can, but Kibana needs that log data first. You export/load the log data into Elasticsearch using either Filebeat or Logstash, then use Kibana to set up watchers, alerts, etc. to notify you of your 400s, 401s, 500s and other error codes.
Not sure if you have an Elasticsearch cluster built already, but Kibana works against Elasticsearch (it does not read log files directly from the application's machine).
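A minimal filebeat.yml sketch of that pipeline (the log path and Elasticsearch address are assumptions; adjust to your environment):

filebeat.inputs:
- type: log
  paths:
    - /opt/weblogic/logs/myapp/*.log    # wherever log4j writes (assumed path)
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'    # stitch stack-trace lines onto the entry that starts them
  multiline.negate: true
  multiline.match: after
output.elasticsearch:
  hosts: ["localhost:9200"]    # your Elasticsearch cluster (assumed address)

The multiline settings matter for log4j output, because stack traces span many lines and should be indexed as one event.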

Elastic Uptime monitors using Heartbeat: a few monitors are missing in Kibana

I have the ELK stack set up on an EC2 server, with Beats like Metricbeat, Filebeat, and Heartbeat.
I have set up Elastic APM for some applications like Jenkins and SonarQube.
Now in Uptime I can see only a few monitors, like SonarQube and Jenkins.
Other applications are missing.
When I look for yesterday's data, it is not available in Elasticsearch for particular applications.
The best way to troubleshoot what is going on is to check whether the events from Heartbeat are being collected. The Uptime application only displays events from Heartbeat, so that is the Beat you need to check.
First, check the connectivity of Heartbeat and the configured output:
heartbeat test output
Secondly, check if the events are being generated. You can do this by commenting out your existing output (likely Elasticsearch/Elastic Cloud) and enabling either the Console output or the File output. Then start Heartbeat and check if events are being generated. If they are, then it might be something on the backend side of things; maybe Elasticsearch is rejecting the documents and refusing to index them.
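For example, in heartbeat.yml (the commented-out hosts value is just a placeholder for your real output):

# temporarily disable the real output while debugging:
# output.elasticsearch:
#   hosts: ["localhost:9200"]

# print the generated events to stdout instead:
output.console:
  pretty: true

If events show up on the console but never reach Uptime, the problem is on the Elasticsearch side rather than in the monitor configuration.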
Apropos, Elastic is implementing a native Jenkins plugin that allows you to observe your CI pipeline using OpenTelemetry-compatible backends such as Elastic APM. You can learn more about this plugin here.

Sending log files/data from one EC2 instance to another

So I have one EC2 instance with Logstash, Elasticsearch, and Kibana installed on it, and another EC2 instance that's running a dummy Apache server. Now I know that I should install Filebeat on the Apache server instance to send the log files to the Logstash instance, but I'm not sure how to configure the files.
My main goal is to send the log files from one instance to the other for processing and viewing, i.e. in ES and Kibana. Any help or advice is greatly appreciated.
Thanks in advance!
Cheers!
As you have already stated, the easiest way to send log events from one machine to an Elastic instance is to install the Filebeat agent on the machine where Apache is running.
Filebeat has its own Apache module that makes the configuration even easier! In the module you specify the paths of the desired log files.
Then you also need to configure Filebeat itself. In filebeat.yml you define the Logstash destination under
output.logstash
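Putting those pieces together, a sketch (the log paths and the Logstash address are assumptions; the module is named apache2 in older Filebeat versions):

# enable the module once on the Apache machine:
filebeat modules enable apache

# modules.d/apache.yml -- point the module at the log files:
- module: apache
  access:
    enabled: true
    var.paths: ["/var/log/apache2/access.log*"]
  error:
    enabled: true
    var.paths: ["/var/log/apache2/error.log*"]

# filebeat.yml -- send everything to the Logstash instance:
output.logstash:
  hosts: ["10.0.0.5:5044"]

On the Logstash side there must be a matching beats input listening on that port, e.g. input { beats { port => 5044 } }.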
This configuration guide gets into more detail.
Take a look at the filebeat.yml reference for all configuration settings.
If you are familiar with Docker, there is also a guide on how to run Filebeat on Docker.
Have fun! :-)

Issues with an Elasticsearch server running in Docker

I am new to Docker. I have developed a webapp that needs an Elasticsearch server up and running, and the server itself listens for user requests. In our workflow, we would like to see Elasticsearch up and running, then have Logstash publish data (by reading the data we have with the help of Logstash conf files), and finally launch our webapp. I am advised to use Docker Compose as it helps to compose multiple servers.
So, we have three sub-directories, one each for ES, Logstash, and the webapp.
As a first step, my Elasticsearch Dockerfile has the lines below:
FROM elasticsearch:latest
RUN plugin -i elasticsearch/marvel/latest
Similarly, I have Dockerfiles in the other sub-directories as well.
I use the command 'docker-compose up' to start the process. Once ES is built, Logstash is built. While Logstash is being built, it tries to publish data to ES, but it finds that ES is not up and running; I see a connection refused exception. Can someone tell me why this error occurs? Are the contents of the Dockerfile OK?
I think I am not using docker / docker-compose the way I should. A couple of pointers to learning materials would be helpful; I find plenty but cannot relate them to my use case.
There are two phases when running docker-compose up. The "build" phase, which runs docker build, and the "run" phase, which runs the equivalent of docker run. During the build phase there is no support for linking to other containers, so you won't be able to connect to them.
An easy solution is to run part of the setup during the "run" phase, but the downside is that it isn't cached and has to be performed each time.
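For the run-phase approach, a common pattern is to wrap the dependent container's startup in a wait loop. A sketch of such an entrypoint script (the service name "es" and the config path are assumptions based on a typical compose file):

#!/bin/sh
# wait-for-es.sh -- used as the Logstash container's entrypoint:
# block until Elasticsearch answers on port 9200, then start Logstash
until curl -s http://es:9200 >/dev/null; do
  echo "waiting for elasticsearch..."
  sleep 2
done
exec logstash -f /etc/logstash/conf.d/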
To include it as part of the build phase, you can export the JSON that needs to be sent to ES, include it as part of the ES build, and then have a step that does something like this (see the sketch after the list):
start es
wait for es to be available
run curl to PUT/POST the data to ES
stop es
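In shell form, that step might look like this (the paths, pid file, and index name are all assumptions):

# runs inside the image during `docker build`
/usr/share/elasticsearch/bin/elasticsearch -d -p /tmp/es.pid    # start es in the background
until curl -s http://localhost:9200 >/dev/null; do sleep 1; done    # wait for es to be available
curl -s -XPOST 'http://localhost:9200/myindex/_bulk' --data-binary @/data.json    # load the exported data
kill "$(cat /tmp/es.pid)"    # stop es so the layer can be committed

The result is cached with the image layer, so the data does not have to be reloaded on every start.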

Elasticsearch and Logstash shutting down prematurely

I am pretty new to this and currently have a single Unix (CentOS) server running Logstash, Elasticsearch, and Kibana. The data is being consumed from a RabbitMQ exchange and this works pretty well, but for some reason after a few hours the Kibana dashboard becomes inactive, the Elasticsearch node goes inactive, and Logstash stops consuming. I initially set it up to start each process manually, e.g. ./elasticsearch, and wonder if setting it up as a service would prevent this from occurring.
I want to ensure that the setup runs continuously without any interruptions.
http://192.xxx.xxx.xxx:9200/_plugin/head/
Any suggestions and links appreciated
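For what it's worth, processes started from an interactive SSH session are killed when that session ends, so running them under the init system should help. A sketch for a systemd-based box (unit names assume the official packages were used for installation):

sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
# likewise for logstash and kibana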
