Can anyone provide the Logstash configuration to access logs that are located on a remote system? I have tried with the IP address, but it says that the plugin failed.
You are going to need to set up some sort of shipping method from your remote system to your system, e.g. logstash-forwarder. Here is a good guide for getting that set up.
Logstash-forwarder on your remote system will watch any logs that you specify in the logstash-forwarder configuration and ship those logs to the system that is running the Logstash server.
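To make that concrete, here is a rough sketch of both ends; the hostname, port, and paths are placeholder assumptions, not values from your setup.

On the remote system, logstash-forwarder.conf (JSON):

{
  "network": {
    "servers": [ "logstash.example.com:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    { "paths": [ "/var/log/syslog", "/var/log/*.log" ], "fields": { "type": "syslog" } }
  ]
}

On the Logstash server, a lumberjack input to receive those events:

input {
  lumberjack {
    port => 5043
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

Note that logstash-forwarder requires TLS, so the certificate referenced by the forwarder's "ssl ca" has to match the one the lumberjack input serves.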
I get:
JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
and I am not familiar with cassandra-env.sh.
I tried nano /etc/cassandra/cassandra-env.sh in the terminal, but from there I'm lost.
By default, JMX only accepts local connections, so you can't connect remotely. To change that you need to modify cassandra-env.sh:
https://docs.datastax.com/en/archived/ddacsecurity/doc/ddacsecurity/secureJmxAuthentication.html
Where you see:
if [ "$LOCAL_JMX" = "yes" ]; then
JVM_OPTS="$JVM_OPTS -Dcassandra.jmx.local.port=$JMX_PORT"
You'll need to change LOCAL_JMX to no so that the script falls through to the remote branch. Then you'll need to configure the following parameters (a sketch of the full block is below):
-Dcassandra.jmx.remote.port=7199
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=10.101.35.37
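Putting it together, the relevant section of cassandra-env.sh would look roughly like this sketch; the port and the hostname IP are just the example values from above, so substitute your own:

LOCAL_JMX=no

if [ "$LOCAL_JMX" = "yes" ]; then
    JVM_OPTS="$JVM_OPTS -Dcassandra.jmx.local.port=$JMX_PORT"
else
    JVM_OPTS="$JVM_OPTS -Dcassandra.jmx.remote.port=7199"
    JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
    JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"
    JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=10.101.35.37"
fi

Restart Cassandra afterwards so the new JVM options take effect.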
If you want SSL then you'll need to configure that as well. JMX is just Java, so it's not specific to Cassandra. The configuration is actually covered in the Java documentation:
Remote JMX connection
You didn't provide a lot of detail in your post, but I'm guessing that you saw the warning in the Cassandra system.log.
During startup, Cassandra performs certain checks to make sure that the node is configured correctly. One of the checks it performs is to verify that the JMX port is configured.
In your case, you didn't configure JMX for remote access, so the WARN was logged by the StartupChecks.java class. There is no reason to be alarmed by the message. It is completely fine to not allow remote JMX access, particularly since you are most likely just trying Cassandra out for the first time.
Remote access isn't necessary for Cassandra to function. Unless you plan to run management commands like nodetool remotely, it's perfectly fine to leave it as-is. You will still be able to run admin commands locally on your Mac. Cheers!
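For example, run locally on the node itself (no remote JMX needed), commands like these work as usual:

nodetool status
nodetool info
nodetool tpstats

They connect to JMX on 127.0.0.1:7199, which is exactly what the default local-only configuration allows.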
When using Logstash to retrieve Airflow logs from a folder you have access to, would I still need to make any changes in the airflow.cfg file?
For instance, I have Airflow and ELK deployed on the same EC2 instance. The Logstash .conf file has access to the Airflow logs path since they are on the same instance. Do I need to turn on remote logging in the Airflow config?
In fact, you have two options to push Airflow logs to Elasticsearch:
Using a log collector (Logstash, Fluentd, ...) to collect the Airflow logs and send them to the Elasticsearch server. In this case you don't need to change any Airflow config; you can just read the logs from the files or stdout and send them to ES.
Using the Airflow remote logging feature. In this case Airflow will log directly to your remote logging server (ES in your case), and will keep a local copy of the log to show when the remote server is unavailable.
So the answer to your question is no: if you have Logstash, you don't need the Airflow remote logging config. A minimal sketch of the Logstash side follows.
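This is only an illustrative pipeline for option 1; the log path, index name, and ES address are assumptions about your instance, not values taken from your setup:

input {
  file {
    path => "/opt/airflow/logs/**/*.log"          # assumed Airflow base_log_folder
    start_position => "beginning"
    sincedb_path => "/var/lib/logstash/sincedb_airflow"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]            # ES running on the same EC2 instance
    index => "airflow-logs-%{+YYYY.MM.dd}"
  }
}

Since Logstash is doing the shipping here, airflow.cfg can stay untouched.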
I am trying to build familiarity with SIEM systems in general and decided to set up an Elastic Stack on DigitalOcean. Everything was successful, and my server is producing logs as localhost. It's been interesting to tinker with visualizations and that good stuff.
Obviously my interest isn't in logs from this remote server, though. I would like to configure some devices on my home network to send logs.
Current setup on the server: Filebeat > Logstash > Elasticsearch > Kibana.
When I install Filebeat onto, say, my laptop and configure the .yml file in a similar way to the server (comment out the Elasticsearch output, uncomment the Logstash output), it is not able to connect. Basically I just set the hosts to serverip:logstash port and enabled Filebeat on the system. Running the setup commands leads to a "couldn't connect to any configured elasticsearch hosts".
Instead of a direct answer, can someone explain for me generally what I need to be considering for this process? What is happening when connecting from outside the server's LAN? And how do I handle authentication to the server, if needed?
Thank you, really. I know that the information is out there but I am deep in a rabbit hole and having a hard time finding what I need.
By default, the HTTP API is bound to only the host's local loopback interface, ensuring that it is not accessible to the rest of the network. Because the API includes neither authentication nor authorization and has not been hardened or tested for use as a publicly-reachable API, binding to publicly accessible IPs should be avoided where possible.
Even if you set "http.host: 0.0.0.0", you still need to open the port for your laptop (better, if you already have a public IP, to open it only for your laptop's address).
For authentication, you have to investigate the X-Pack security features.
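As a rough sketch of the Filebeat-to-Logstash leg (the port, paths, and IP here are assumptions; adjust them to whatever your beats input actually uses):

On the laptop, in filebeat.yml:

filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log
output.logstash:
  hosts: ["<server-public-ip>:5044"]

On the server, the Logstash pipeline needs a beats input listening on a non-loopback interface, and the droplet firewall must allow that port from your home IP:

input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}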
BR Alexey.
I have Logstash installed on a server which will process logs and publish them to Elasticsearch. But is it possible for Logstash to pull logs from remote (Linux) servers without installing Filebeat on those servers?
Or, if Filebeat is installed on the same server as Logstash, can it fetch the logs? Please let me know if there is any other option as well.
Thanks in advance
Neither Logstash nor Filebeat can pull/fetch log files from remote servers; you need to have some tool installed on the remote servers that will ship the logs elsewhere.
Logstash can consume logs from message-queue systems like Kafka, Redis, or RabbitMQ, for example, but your remote servers still need to send the logs to those systems, so you would need a log shipper on your remote servers anyway (see the sketch below).
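For illustration only, if the remote servers were already publishing to Kafka, the Logstash side could look roughly like this; the broker address and topic are made-up assumptions:

input {
  kafka {
    bootstrap_servers => "kafka.example.com:9092"
    topics => ["app-logs"]
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}

The point stands, though: something on the remote servers still has to write the logs into Kafka in the first place.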
I'm developing a web app, and I need to get the Mesos logs. Normally, I can get them with
Url: http://host:5051/files/read.json?
but when the master or slave restarts, I can't get the logs.
Please tell me how I could get them.
All logs are stored in the log directory you specify via the --log_dir flag. I'm not sure you can access logs from previous runs via the WebUI, but you can definitely SSH into the specific machine.
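A rough sketch of what that looks like; the directory and filenames below are typical glog-style defaults, so treat them as assumptions rather than exact values:

# start the agent (slave) with a persistent log directory
mesos-slave --master=zk://... --log_dir=/var/log/mesos

# after a restart, the older log files are still on that machine's disk
ssh user@host
ls /var/log/mesos/
# e.g. mesos-slave.INFO, mesos-slave.WARNING, plus timestamped files from previous runs
less /var/log/mesos/mesos-slave.INFO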