filter syslog messages rsyslog ubuntu

I'm trying to filter out all syslog messages except those that are defined in my config, and then send those messages to an external syslog server.
I have Ubuntu 16.04 with rsyslog, and I have configured Nextcloud to log to the syslog daemon.
My message (from /var/log/syslog) that I want to allow to be sent to the external syslog server:
Jul 11 15:55:28 test-virtual-machine ownCloud[28466]: {files_antivirus} Infected file deleted. Eicar-Test-Signature File: files/eicar(3).com.ocTransferId993388412.part Acccount: admin
I have tried to modify the rsyslog.conf file (rest of the file is default):
nextcloud.* -/var/log/nextcloud.log
:msg, contains, "*Infected*" -/var/log/nextcloud3.log
nextcloud.* #remote-host:514
This is not working at all. Does anyone have any input?
Thanks,

I have the following that is currently working...
In /etc/rsyslog.d/60-my-filter.conf
:rawmsg,contains,"TAG" -/var/log/tag.log
My guess from the above is that you need to replace ":msg" with ":rawmsg", but I am no expert. I would also try removing the spaces and the asterisks, e.g. ':rawmsg,contains,"Infected" -/var/log/nextcloud3.log'
Also, remove the nextcloud.* lines until you know you are getting the files formatted/filtered properly, and then try adding them back.
Hope this helps.
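For what it's worth, here is a rough sketch of what I would try myself (untested, and remote-host:514 is just your placeholder): rsyslog's contains comparison is already a substring match, so the asterisks are not needed, and forwarding needs an @ (UDP) or @@ (TCP) prefix rather than #, which starts a comment. The sample line is tagged ownCloud, so a programname filter may be more reliable than the nextcloud.* facility selector:
# log matching messages locally, forward them over TCP, then stop processing them
:programname, isequal, "ownCloud" -/var/log/nextcloud.log
& @@remote-host:514
& stop
# or filter on message content only
:msg, contains, "Infected" -/var/log/nextcloud3.log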

Related

mosquitto broker will not log to file

I installed an updated Eclipse Mosquitto broker on a Windows host for my home alarm and control MQTT network. I had a bit of trouble getting it to listen for remote clients, but got everything working with the existing clients.
The trouble is I can't get the service to log no matter what I put in mosquitto.conf. If I run it directly from a command prompt (mosquitto -v) it logs to the console, and always says 'Using default config'. Does this mean it can't locate the CONF file? I've tried several versions of mosquitto going back to my original 1.6 and they all do the same thing.
This setting:
log_dest file c:\projects#lab\mosquitto.log
is the problem. I see where it says that a Windows service defaults to 'log_dest none', but I assumed I could override that.
I doubt # is valid in a path name.
Also, mosquitto doesn't have a default config file name; you must pass it on the command line with the -c option.
The service picks up mosquitto.conf from the install dir, but only when running as the service
Also just to be clear, -v overrides all logging options (including writing to a file). From the man page:
-v, --verbose
Use verbose logging. This is equivalent to setting log_type to all in the configuration file. This overrides any logging options
given in the configuration file.
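For what it's worth, a minimal sketch of logging settings that should write to a file once the '#' is out of the path (the path and install location below are only examples), plus a foreground test run without -v so the file options are not overridden:
# mosquitto.conf
log_dest file C:\projects\lab\mosquitto.log
log_type error
log_type warning
log_type notice
log_type information

# test from a command prompt with an explicit config file
mosquitto -c "C:\Program Files\mosquitto\mosquitto.conf"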

Completely hide server name Apache Windows

Hope you guys are doing well.
I have one query. I have added the below lines to my Windows Apache httpd.conf file:
ServerSignature Off
ServerTokens Prod
HostnameLookups Off
TraceEnable off
And I am getting output like Server: Apache when using curl -I.
Actually, I am looking for output like Server: Unknown or an empty Server: "" header.
Note: my Windows Apache version is Server version: Apache/2.4.46 (Win64).
Kindly help me with how I can hide this Server information as well, as it's a security threat to our instances.
Thanks
The Apache documentation says:
Also note that disabling the Server: header does nothing at all to make your server more secure. The idea of "security through obscurity" is a myth and leads to a false sense of safety.
You would need to modify the source code, or install mod_security, and then you can add:
SecRuleEngine On
SecServerSignature Unknown
You can modify the source code as follows: How to change Apache's 'Server:' header without mod_security?
To remove server header by editing source: https://stackoverflow.com/a/66667833/12154890
Editing the source is probably the only way to remove the Server: header completely.
Since you are using Windows, if you cannot install additional modules like mod_security or recompile, you cannot remove it.
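If you can get a Windows build of mod_security, a minimal httpd.conf sketch might look like the following (the module filenames and paths are assumptions for a typical Windows build; note that ModSecurity's documentation says ServerTokens must be set to Full for SecServerSignature to take effect):
LoadModule unique_id_module modules/mod_unique_id.so
LoadModule security2_module modules/mod_security2.so
# ModSecurity rewrites the full banner, so expose the full tokens and let it overwrite them
ServerTokens Full
SecRuleEngine On
SecServerSignature "Unknown"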

Ubuntu-based Logstash Keystore Permissions Issues

Background: I'm working in an Ubuntu 20.04 environment, setting up Logstash servers to ship metrics to my Elastic cluster. With my relatively basic configuration, I'm able to have a Filebeat process send logs to a load balancer, which then spreads them across my Logstash servers and up to Elastic. This process works. I'd like to use the Logstash keystore to avoid passing sensitive variables to my logstash.yml file in plain text. In my environment, I'm able to follow the Elastic documentation to set up a password-protected keystore in the default location, add keys to it, and successfully list out those keys.
Problems: While the Logstash servers successfully run without the keystore, the moment I add them and try to watch the logfile on startup, the process never starts. It seems to continue attempting restart without ever logging to the logstash-plain.log. When trying to run the process in the foreground with this configuration, the error I received was the rather-unhelpful:
Found a file at /etc/logstash/logstash.keystore,
but it is not a valid Logstash keystore
Troubleshooting Done: After trying some steps found in other issues, such as replacing the /etc/sysconfig/logstash creation with simply adding the password to /etc/default/logstash, the errors were a little more helpful, stating that the file permissions or password were incorrect. The logstash-keystore process itself was capable of creating and listing keys, so the password was correct, and the keystore itself was set to 0644. I tried multiple permissions configurations and was still unable to get Logstash to run as a process or in the foreground.
I'm still under the impression it's a permissions issue, but I don't know how to resolve it. Logstash runs as the logstash user, which should be able to read the keystore file since it's 0644 and housed in the same dir as logstash.yml.
Has anyone experienced something similar with Logstash & Ubuntu, or in a similar environment? If so, how did you manage to get past it? I'm open to ideas and would love to get this working.
Try running logstash-keystore as the logstash user:
sudo -u logstash /usr/share/logstash/bin/logstash-keystore \
--path.settings /etc/logstash list
[Aside from the usual caveats about secret obfuscation of this kind, it's worth making explicit that the docs expect logstash-keystore to be run as root, not as logstash. So after you're done troubleshooting, especially if you create a keystore owned by logstash, make sure it ultimately has permissions that are sufficiently restrictive]
Alternatively, you could run some other command as the logstash user. To validate the permission hypothesis, you just need to read the file as user logstash:
sudo -u logstash file /etc/logstash/logstash.keystore
sudo -u logstash md5sum /etc/logstash/logstash.keystore
su logstash -c 'cat /etc/logstash/logstash.keystore > /dev/null'
# and so on
If, as you suspect, there is a permissions problem, and the read test fails, assemble the necessary data with these commands:
ls -dla /etc/logstash/{,logstash.keystore}
groups logstash
By this point you should know:
what groups logstash is in
what groups are able to open /etc/logstash
what groups are able to read /etc/logstash/logstash.keystore
And you already said the keystore's mode is 644. In all likelihood, logstash will be a member of the logstash group only, and /etc/logstash will be world readable. So the TL;DR version of this advice might be:
# set group on the keystore to `logstash`
chgrp logstash /etc/logstash/logstash.keystore
# ensure the keystore is group readable
chmod g+r /etc/logstash/logstash.keystore
If it wasn't permissions, you could try recreating the store without a password. If it then works, you'll want to be really careful about how you handle the password environment variable, and go over the docs with a fine-tooth comb.
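If you do keep the password-protected keystore, the service also needs the password at startup; per the Elastic docs this is the LOGSTASH_KEYSTORE_PASS environment variable, which on Debian/Ubuntu packages can live in /etc/default/logstash (the password below is obviously a placeholder):
# /etc/default/logstash
LOGSTASH_KEYSTORE_PASS=changeme

# restart the service and watch the log it should now write
sudo systemctl restart logstash
sudo tail -f /var/log/logstash/logstash-plain.log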

Install Filebeat locally or in the VM?

I recently started learning ELK and I have succeeded in parsing my XML files locally. But now I would like to access my server to get all of my XML files (updated every 30 seconds).
I have the IP address of my server, and my question is: should I install Filebeat locally and configure my filebeat.yml to access the server, or should I install Filebeat on the server and then point it at my local address?
Filebeat is a shipper, which collects, aggregates, and forwards logs to your desired output (Logstash, Elasticsearch, etc.).
It works as an agent, so you need to install it on every node from which you want to collect logs. For instance, if you want to collect logs from your local machine, install Filebeat there; if you want to collect logs from the Logstash server itself, install Filebeat there. If you want to collect logs from both, then Filebeat needs to be installed on both machines, with Logstash as the output.
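A minimal filebeat.yml sketch for the server side, assuming the XML files sit under a path like /var/data/xml and that Logstash is listening on the default Beats port 5044 (the path and host are assumptions):
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/data/xml/*.xml
output.logstash:
  hosts: ["your-logstash-host:5044"]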
But when I tried to install filebeat on my server using
curl -L -O elastic.co/downloads/beats/filebeat/filebeat-6.3.1-amd64.deb
I get this message:
Could not resolve host: www.elastic.co; Name or service not known
The OS version of the server is: Linux version 3.10.0-693.17.1.el7.x86_64
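As an aside, "Could not resolve host" points to DNS resolution failing on that server rather than anything Filebeat-specific; a quick check before retrying the download (assuming the usual tools are installed):
nslookup www.elastic.co
curl -I https://www.elastic.co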

Kafka 1.0 stops with FATAL SHUTDOWN error. Logs directory failed

I have just upgraded to Kafka 1.0 and ZooKeeper 3.4.10. At first, it all started fine. A stand-alone producer and consumer worked as expected. After I had run my code for about 10 minutes, Kafka failed with this error:
[2017-11-07 16:48:01,304] INFO Stopping serving logs in dir C:\Kafka\kafka_2.12-1.0.0\kafka-logs (kafka.log.LogManager)
[2017-11-07 16:48:01,320] FATAL Shutdown broker because all log dirs in C:\Kafka\kafka_2.12-1.0.0\kafka-logs have failed (kafka.log.LogManager)
I have reinstalled and reconfigured Kafka 1.0 again, and the same thing happened. If I try to restart, the same error occurs.
Deleting log files helps to start Kafka, but it fails again after the short run.
I had been running version 0.10.2 for a long while and never encountered anything like this; it was very stable over long periods of time.
I have tried to find a solution and followed instructions in the documentation.
This is not yet a production environment, it is fairly simple setup, one producer, one consumer reading from one topic.
I am not sure if this could have anything to do with zookeeper.
Update: the issue has been posted to the Apache JIRA board.
The consensus so far seems to be that it is a Windows issue.
I ran into this issue as well, and clearing only the kafka-logs did not work. You'll also have to clear the ZooKeeper data.
Steps to resolve:
Make sure to stop zookeeper.
Take a look at your server.properties file and locate the logs directory under the following entry.
Example:
log.dirs=/tmp/kafka-logs/
Delete the log directory and its contents. Kafka will recreate the directory once it's started again.
Take a look at the zookeeper.properties file and locate the data directory under the following entry.
Example:
dataDir=/tmp/zookeeper
Delete the data directory and its contents. Zookeeper will recreate the directory once it's started again.
Start zookeeper.
<KAFKA_HOME>/bin/zookeeper-server-start.sh -daemon <KAFKA_HOME>/config/zookeeper.properties
Start the Kafka broker.
<KAFKA_HOME>/bin/kafka-server-start.sh -daemon <KAFKA_HOME>/config/server.properties
Verify the broker has started with no issues by looking at the logs/kafkaServer.out log file.
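Since the paths in the error are Windows paths, note that on Windows the equivalent start scripts live under bin\windows as .bat files; a sketch of the same steps from a command prompt (the install path is taken from the question):
cd C:\Kafka\kafka_2.12-1.0.0
bin\windows\zookeeper-server-start.bat config\zookeeper.properties
bin\windows\kafka-server-start.bat config\server.properties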
I've tried all the solutions, like:
Clearing the Kafka logs and ZooKeeper data (the issue reoccurred after creating a new topic).
Changing the log.dirs path from forward slashes "/" to backslashes "\" (like log.dirs=C:\kafka_2.12-2.1.1\data\kafka); a folder named C:\kafka_2.12-2.1.1\kafka_2.12-2.1.1datakafka was created, the issue did stop, and it was resolved.
Finally I found this link; you'll get it if you google "kafka log.dirs windows":
Just clean the logs in C:\Kafka\kafka_2.12-1.0.0\kafka-logs and restart Kafka.
If you are trying to run Kafka on a Windows machine, try changing the log.dirs parameter in server.properties (in the config folder) to a Windows-style path (like log.dirs=C:\some_path\some_path_kafLogs).
By default, this path is written in the Unix style (like /unix/path/).
This worked for me on a Windows machine.
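A server.properties sketch for Windows (the path itself is just an example): the file is parsed as a Java properties file, where a single backslash acts as an escape character and gets swallowed (which is why the mangled folder name in the earlier answer appeared), so forward slashes or doubled backslashes are the safe spellings:
# either form works on Windows
log.dirs=C:/kafka/kafka-logs
# log.dirs=C:\\kafka\\kafka-logs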
So this seems to be a Windows issue.
https://issues.apache.org/jira/browse/KAFKA-6188
The JIRA is resolved, and there is an unmerged patch attached to it.
https://github.com/apache/kafka/pull/6403
So your options are:
get it running on Windows by building it with the patch
run it on a Unix-style filesystem (Linux or macOS)
perhaps running it in Docker on Windows is worth a shot
The problem is concurrent access to Kafka's log files; the idea is to delay external changes to the log files across all Kafka threads.
Topic configuration can help:
import static org.apache.kafka.common.config.TopicConfig.*;
import java.util.HashMap;
import java.util.Map;

Map<String, String> config = new HashMap<>();
config.put(CLEANUP_POLICY_CONFIG, CLEANUP_POLICY_COMPACT); // compact the log instead of deleting segments
config.put(FILE_DELETE_DELAY_MS_CONFIG, "3600000");        // wait 1 hour before deleting a segment file from disk
config.put(DELETE_RETENTION_MS_CONFIG, "864000000");       // keep delete markers (tombstones) for 10 days
config.put(RETENTION_MS_CONFIG, "86400000");               // retain records for 1 day
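For completeness, one way such a map could be applied is when creating the topic through the AdminClient; this is only a sketch, and the topic name, partition/replication counts, and broker address below are assumptions:
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
try (AdminClient admin = AdminClient.create(props)) {
    // attach the topic-level overrides from the config map above
    NewTopic topic = new NewTopic("my-topic", 1, (short) 1).configs(config);
    // get() throws checked exceptions, so call this from a method that declares them
    admin.createTopics(Collections.singletonList(topic)).all().get();
}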
What worked for me was deleting both the Kafka and ZooKeeper log directories, then changing the log directory paths in both the Kafka and ZooKeeper properties files (found under kafka/config/) from the usual slash '/' to a backslash '\'.
On Windows, changing the path separators resolved the issue; each separator required a double backslash, e.g. C:\\path\\logs.
Simply delete all the logs from:
C:\tmp\kafka-logs
and restart ZooKeeper and the Kafka server.
