Convert syslog-ng to rsyslog

How can I convert this configuration to rsyslog?
options {
    long_hostnames(off);
    sync(0);
    perm(0640);
    stats(3600);
    log_msg_size(163840);
    log_fifo_size(50000);
};
source s_local {
    unix-dgram("/dev/log");
    file("/proc/kmsg" log_prefix("kernel:"));
};
I'm migrating to rsyslog 7.4 and can't find anywhere in the official documentation how to set those options or how to listen on a unix-dgram socket.
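For what it's worth, here is a rough sketch of an equivalent rsyslog 7.4 configuration using legacy directives. The mapping below is a best-effort guess and should be checked against the rsyslog documentation; long_hostnames() and sync() have no direct counterpart, since rsyslog does not sync output files by default.

```
# message size limit must be set before the inputs are loaded
$MaxMessageSize 163840      # like log_msg_size(163840)

$ModLoad imuxsock           # local system log socket /dev/log (unix-dgram)
$ModLoad imklog             # kernel log from /proc/kmsg

$FileCreateMode 0640        # like perm(0640)
$MainMsgQueueSize 50000     # roughly log_fifo_size(50000)

# periodic statistics, like stats(3600)
$ModLoad impstats
$PStatInterval 3600
```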


Serializer for custom type 'janusgraph.RelationIdentifier' not found

Janus Server config
serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1, config: { serializeResultToString: true }}
Java/Spring Boot config
@Bean
public Cluster cluster() {
    return Cluster.build()
            .addContactPoint(dbUrl)
            .port(dbPort)
            .serializer(new GraphBinaryMessageSerializerV1())
            .maxConnectionPoolSize(5)
            .maxInProcessPerConnection(1)
            .maxSimultaneousUsagePerConnection(10)
            .create();
}
Getting the following error,
Caused by: org.apache.tinkerpop.gremlin.driver.ser.SerializationException: java.io.IOException: Serializer for custom type 'janusgraph.RelationIdentifier' not found
at org.apache.tinkerpop.gremlin.driver.ser.binary.ResponseMessageSerializer.readValue(ResponseMessageSerializer.java:59) ~[gremlin-driver-3.6.1.jar:3.6.1]
at org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1.deserializeResponse(GraphBinaryMessageSerializerV1.java:180) ~[gremlin-driver-3.6.1.jar:3.6.1]
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinResponseDecoder.decode(WebSocketGremlinResponseDecoder.java:47) ~[gremlin-driver-3.6.1.jar:3.6.1]
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinResponseDecoder.decode(WebSocketGremlinResponseDecoder.java:35) ~[gremlin-driver-3.6.1.jar:3.6.1]
Note: I didn't define the schema. I am migrating the code from AWS Neptune (working code) to JanusGraph.
Any idea why I am getting the above error?
Get queries are working, and a few mutation queries are also working,...
It looks like you have only defined the serializer for JanusGraph types on the server, but not on the client side. You also need to add the JanusGraphIoRegistry on the client side.
This can be done like this:
TypeSerializerRegistry typeSerializerRegistry = TypeSerializerRegistry.build()
        .addRegistry(JanusGraphIoRegistry.instance())
        .create();

Cluster.build()
        .addContactPoint(dbUrl)
        .port(dbPort)
        .serializer(new GraphBinaryMessageSerializerV1(typeSerializerRegistry))
        .maxConnectionPoolSize(5)
        .maxInProcessPerConnection(1)
        .maxSimultaneousUsagePerConnection(10)
        .create();
Alternatively, you can use a config file, which simplifies the code down to:
import static org.apache.tinkerpop.gremlin.process.traversal.AnonymousTraversalSource.traversal;
GraphTraversalSource g = traversal().withRemote("conf/remote-graph.properties");
(I have already created the GraphTraversalSource here because the client is directly created internally by withRemote().)
This is also described in the JanusGraph documentation under Connecting from Java. Note that I've linked to a version of the docs for the upcoming 1.0.0 release, because the documentation for the latest released version still uses Gryo instead of GraphBinary. But you can already use this with JanusGraph 0.6, and it also makes sense to use GraphBinary instead of Gryo, because support for Gryo will be dropped in version 1.0.0.
The config file conf/remote-graph.properties then looks like this (also taken from the JanusGraph documentation):
hosts: [localhost]
port: 8182
serializer: {
  className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1,
  config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
You can also specify the various options that you are currently specifying via the builder. This configuration is documented in the TinkerPop reference docs.
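As a hedged illustration (the key names below follow the TinkerPop driver settings and should be verified against the reference docs for your driver version), the builder options used above would map to entries like these in the YAML cluster configuration:

```
hosts: [localhost]
port: 8182
connectionPool: {
  maxSize: 5,
  maxInProcessPerConnection: 1,
  maxSimultaneousUsagePerConnection: 10 }
serializer: {
  className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1,
  config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
```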

Why is rsyslog unable to parse incoming syslogs with a JSON template when they are forwarded over TCP to some port (say 10514)?

I am currently forwarding the incoming syslogs via rsyslog to a local Logstash port, using the template below, which resides in /etc/rsyslog.d/json-template.conf.
The contents of json-template.conf are as follows:
template(name="json-template"
  type="list") {
    constant(value="{")
    constant(value="\"@timestamp\":\"")     property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"@version\":\"1")
    constant(value="\",\"message\":\"")     property(name="msg" format="json")
    constant(value="\",\"sysloghost\":\"")  property(name="hostname")
    constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
    constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
    constant(value="\",\"programname\":\"") property(name="programname")
    constant(value="\",\"procid\":\"")      property(name="procid")
    constant(value="\"}\n")
}
Forwarding configuration in /etc/rsyslog.conf:
*.* @@127.0.0.1:10514;json-template
rsyslog is able to send incoming syslogs to port 10514 but it is not able to parse the meaningful information from the syslogs.
NOTE: I have the same setup for UDP, and rsyslog is able to parse all the messages as per the JSON template.
I tried the same rsyslog configuration with UDP. Forwarding configuration in /etc/rsyslog.conf:
*.* @127.0.0.1:10514;json-template
and rsyslog is able to parse everything from the syslog (timestamp, message, sysloghost).
All the necessary configuration for opening the TCP port for TCP forwarding and the UDP port for UDP forwarding is taken care of as follows:
for tcp:
sudo firewall-cmd --zone=public --add-port=10514/tcp
for udp:
sudo firewall-cmd --zone=public --add-port=10514/udp
The only thing I am not able to figure out is what I am missing with respect to parsing syslogs with TCP forwarding.
Expected outcome: rsyslog should be able to parse syslog as per json template
I found the problem: the json-template sends JSON instead of the RFC3164 or RFC5424 format, so we have to add a json filter in the Logstash configuration file so that the message is parsed as JSON.
My Logstash configuration file looks like this:
input {
  tcp {
    host => "127.0.0.1"
    port => 10514
    type => "rsyslog"
  }
}
# This filter block parses the message field as JSON. You can later add other
# filters here to further process your log lines
filter {
  json {
    source => "message"
  }
  if "_jsonparsefailure" in [tags] {
    drop {}
  }
}
# This output block will send all events of type "rsyslog" to Elasticsearch at the configured
# host and port into daily indices of the pattern, "logstash-YYYY.MM.DD"
output {
  if [type] == "rsyslog" {
    elasticsearch {
      hosts => [ "localhost:9200" ]
    }
  }
}

Error while connecting Logstash and Elasticsearch

I am very new to ELK. I installed ELK version 5.6.12 on a CentOS server. Elasticsearch and Kibana work fine, but I cannot connect Logstash to Elasticsearch.
I have set the environment variables as
export JAVA_HOME=/usr/local/jdk1.8.0_131
export PATH=/usr/local/jdk1.8.0_131/bin:$PATH
I run a simple test:
bin/logstash -e 'input { stdin { } } output { elasticsearch { host => localhost:9200 protocol => "http" port => "9200" } }'
I get the error:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /etc/logstash/logstash.yml/log4j2.properties. Using default config which logs errors to the console
The simple pipeline from the official Logstash documentation works, as follows:
$bin/logstash -e 'input { stdin { } } output { stdout {} }'
Hello
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
The stdin plugin is now waiting for input:
{
    "@version" => "1",
    "host" => "localhost",
    "@timestamp" => 2018-11-01T04:44:58.648Z,
    "message" => "Hello"
}
What could be the problem?
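One likely culprit is the output syntax: in Logstash 5.x the elasticsearch output takes a hosts array, and the separate host/protocol options used in the first command were removed after the 1.x/2.x era. A hedged sketch of the corrected invocation, assuming Elasticsearch is reachable on localhost:9200:

```
bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost:9200"] } }'
```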

SonarQube - specify location of sonar.properties

I'm trying to deploy SonarQube on Kubernetes using ConfigMaps.
The latest 7.1 image I use has its config in sonar.properties, embedded in $SONARQUBE_HOME/conf/. The directory is not empty and also contains a wrapper.conf file.
I would like to mount the ConfigMap inside my container at a location other than /opt/sonar/conf/ and tell SonarQube the new path from which to read the properties.
Is there a way to do that? (environment variable? JVM argument? ...)
It is not recommended to modify this standard configuration in any way, but we can have a look at the SonarQube source code. In this file you can find the code for reading the configuration file:
private static Properties loadPropertiesFile(File homeDir) {
    Properties p = new Properties();
    File propsFile = new File(homeDir, "conf/sonar.properties");
    if (propsFile.exists()) {
        ...
    } else {
        LoggerFactory.getLogger(AppSettingsLoaderImpl.class).warn("Configuration file not found: {}", propsFile);
    }
    return p;
}
So the conf path and file name are hard-coded, and you get a warning if the file does not exist. The home directory is found this way:
private static File detectHomeDir() {
    try {
        File appJar = new File(Class.forName("org.sonar.application.App").getProtectionDomain().getCodeSource().getLocation().toURI());
        return appJar.getParentFile().getParentFile();
    } catch (...) {
        ...
    }
}
So this cannot be changed either. The code above is used here:
@Override
public AppSettings load() {
    Properties p = loadPropertiesFile(homeDir);
    p.putAll(CommandLineParser.parseArguments(cliArguments));
    p.setProperty(PATH_HOME.getKey(), homeDir.getAbsolutePath());
    p = ConfigurationUtils.interpolateVariables(p, System.getenv());
    ....
}
This suggests that you can use commandline parameters or environment variables in order to change your settings.
For my problem, I defined environment variables to configure the database settings in my Kubernetes deployment:
env:
  - name: SONARQUBE_JDBC_URL
    value: jdbc:sqlserver://mydb:1433;databaseName=sonarqube
  - name: SONARQUBE_JDBC_USERNAME
    value: sonarqube
  - name: SONARQUBE_JDBC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: sonarsecret
        key: dbpassword
I also needed the LDAP plugin, but in that case it was not possible to configure it via environment variables. As /opt/sonarqube/conf/ is not empty, I can't use a ConfigMap to decouple the configuration from the image content. So I built my own sonarqube image, adding the LDAP plugin jar and the LDAP settings in sonar.properties:
# General Configuration
sonar.security.realm=LDAP
ldap.url=ldap://myldap:389
ldap.bindDn=CN=mysa=_ServicesAccounts,OU=Users,OU=SVC,DC=net
ldap.bindPassword=****
# User Configuration
ldap.user.baseDn=OU=Users,OU=SVC,DC=net
ldap.user.request=(&(sAMAccountName={0})(objectclass=user))
ldap.user.realNameAttribute=cn
ldap.user.emailAttribute=mail
# Group Configuration
ldap.group.baseDn=OU=Users,OU=SVC,DC=net
ldap.group.request=(&(objectClass=group)(member={dn}))

UnresolvedAddressException in Logstash+elasticsearch

Logstash is not working on my system (Windows 7). I am using Logstash 1.4.0, Kibana 3.0.0, and Elasticsearch 1.3.0.
I created a logstash.conf file in logstash-1.4.0 (logstash-1.4.0/logstash.conf):
input {
  file {
    path => "C:/apache-tomcat-7.0.62/logs/*access*"
  }
}
filter {
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch { host => "localhost:9205" }
}
And I run Logstash:
c:\logstash-1.4.0\bin>logstash agent -f ../logstash.conf
I get the exception below:
log4j, [2015-06-09T15:24:45.342] WARN: org.elasticsearch.transport.netty: [logstash-IT-BHARADWAJ-512441] exception caught on transport layer [[id: 0x0ee1f960]], closing connection
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:123)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:621)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
etc……..
How do I solve this problem?
You can't connect to the socket; by default Elasticsearch listens on port 9200 for HTTP and 9300 for the TCP transport. Try changing it to 9200 first, since that's the default.
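Concretely, a hedged sketch of the corrected output block: the UnresolvedAddressException suggests that "localhost:9205" is being resolved as a single hostname, and in the Logstash 1.4 elasticsearch output the host option appears to take a bare hostname, with the port configured separately.

```
output {
  elasticsearch {
    host => "localhost"   # hostname only, no port appended
    # port => 9300        # TCP transport port; only needed if non-default
  }
}
```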
