rsyslog to fluentd via SSL

I have been configuring rsyslog to send data to fluentd over SSL.
I have installed the secure_forward plugin in fluentd. The config is below.
Fluentd config:
<source>
  @type secure_forward
  shared_key qeu1223123ecdcdjs
  self_hostname myhostname.domain.com
  secure yes
  ca_cert_path /etc/fluentd/appcertificate.crt
  ca_private_key_path /etc/fluentd/ssl.key
  ca_private_key_passphrase passphrase
  bind myhostname.domain.com
  port 25114
  keepalive 3600
  tag system
  <server>
    @type syslog
    port 25143
    bind myhostname.domain.com
    tag system
    format none
  </server>
</source>
rsyslog config:
$DefaultNetstreamDriverCAFile /etc/rsyslog.d/ca_cert.crt
$ActionSendStreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer *.domain.com
*.* @@myhostname.domain.com:25144
Can someone help me figure out what I am missing? I don't know where to set the shared_key in the rsyslog config. Currently, if I run tcpdump I can see data arriving, but I don't see it in the output file.

There does not seem to be a shared_key option available in rsyslog.
I would try authenticating with client certificates instead of a passphrase:
$DefaultNetstreamDriverCAFile /etc/rsyslog.d/ca_cert.crt
$DefaultNetstreamDriverCertFile /etc/rsyslog.d/cert.pem
$DefaultNetstreamDriverKeyFile /etc/rsyslog.d/key.pem
I'll add the tcpdump command for completeness:
sudo tcpdump -i eth0 tcp port 25144 -X -s 0 -nn
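As a quick sanity check that the TLS listener is up and presenting the expected certificate, an openssl test connection can also help (hostname, port, and CA path are taken from the configs above):
# Attempt a TLS handshake against the fluentd listener; look for
# "Verify return code: 0 (ok)" in the output to confirm the CA setup.
openssl s_client -connect myhostname.domain.com:25114 -CAfile /etc/rsyslog.d/ca_cert.crt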

Related

Nodetool command from one node to another node is not working

nodetool -h 10.16.252.129 -p 9042 -u cassandra -pw cassandra status
is giving error:
nodetool: Failed to connect to '10.16.252.129:9042' -
ConnectIOException: 'non-JRMP server at remote endpoint'.
This is in the cassandra.yaml file:
rpc_address: 10.16.252.129
rpc_port: 9160
You have to use port 7199 here for the nodetool command. However, you need to check whether that port is open; if not, you should open/allow it on the firewall.
You can find the JMX port configuration in cassandra-env.sh.
Then you should try running the command below:
nodetool -h Hostname/IP -p 7199 -u username -pw password status
You can find more details about nodetool syntax and usage at the link below:
http://cassandra.apache.org/doc/latest/tools/nodetool/compactionhistory.html
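As a quick way to verify the JMX port is reachable before digging further (assuming nc and iptables are available on your hosts; the IP is the one from the question):
# From the node where you run nodetool: check that the JMX port answers
nc -zv 10.16.252.129 7199

# On the target node: example rule to allow the JMX port through the firewall
iptables -A INPUT -p tcp --dport 7199 -j ACCEPT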
First of all, port 9042 is for the native binary protocol CQL client connections. Port 9160 is for legacy (deprecated) Thrift protocol client connections. Inter-node nodetool commands use the JMX (Java Management eXtensions) protocol over port 7199.
Do note that in order for remote JMX to work, port 7199 will need to be open (firewall), and cassandra-env.sh has configuration lines for:
JMX_PORT="7199"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=$HOST_IP"
You may also want to enable JMX password authentication:
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=true"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"
Also, you shouldn't need to send the port or credentials. The cassandra/cassandra creds are the default for database auth, not JMX. If you enabled JMX password auth, then you'll need to send whatever username and password you defined in the password file. But otherwise, this should work (as long as both the current and target nodes have remote JMX enabled):
nodetool -h 10.16.252.148 status

Enable remote access from one custom IP to Elasticsearch cluster

I have a VPS with Elasticsearch installed. The question is: how can I connect to this remote machine from my home IP? I know that a simple line makes it possible to allow all connections, but that is not secure. When I try to add my custom IP, ES closes the localhost connection and doesn't start properly.
Thank you for any advice!
First, set network.host in elasticsearch.yml to the VPS public IP address, not localhost. Next, you need to open port 9200 (or whichever you are using) to your home computer's specific IP address. Assuming your VPS is Linux, you would achieve this by whitelisting your IP address in iptables and opening the port to that IP address only.
iptables -A INPUT -p tcp -s <source> --dport 9200 -j ACCEPT
As to how secure this would be: the recommendations I've seen floating around mostly agree that it's a good idea to only allow local connections to your Elasticsearch instance. If you want to allow remote connections for testing purposes, then as I've mentioned it is enough to bind your public IP instead of localhost in elasticsearch.yml and open the appropriate ports.
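For reference, the relevant elasticsearch.yml settings might look like this (203.0.113.10 is a placeholder; substitute your VPS public IP):
# Bind HTTP to the VPS public interface instead of localhost
network.host: 203.0.113.10
http.port: 9200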
Thanks again to etarhan. One important thing: before production, check your iptables (firewall) rules whenever you open a port to external IPs. If they allow any remote connection, anybody can update or delete your Elasticsearch clusters. I solved it by following the instructions above, opening a remote connection to my home IP but closing all others:
iptables -A INPUT -p tcp -s <source> --dport 9200 -j ACCEPT
iptables -A INPUT -p tcp --dport 9200 -j DROP
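Note that plain iptables rules are lost on reboot; on Debian/Ubuntu-style systems one common way to persist them is the iptables-persistent package (an assumption about your distro):
# Save the current rules so they survive a reboot
sudo apt-get install iptables-persistent
sudo netfilter-persistent save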

On iMac, access elasticsearch and neo4j ports on local ip address?

ifconfig shows
inet 192.168.10.1
I can access
http://localhost/
http://127.0.0.1/
http://192.168.10.1
They are all the same.
I can also access the Neo4j and Elasticsearch ports at the following URLs:
Elasticsearch
http://127.0.0.1:9200/
http://localhost:9200/
Neo4j
http://127.0.0.1:7474/browser/
http://localhost:7474/browser/
But ports 9200 and 7474 are not working for 192.168.10.1:
http://192.168.10.1:9200
http://192.168.10.1:7474
There is something I need to do to make ports 7474 (Neo4j) and 9200 (Elasticsearch) work for 192.168.10.1, but I don't know what.
Please advise, thanks!
I figured it out.
Neo4j
Set up Neo4j to listen on the IP (not just localhost), in my case:
http://192.168.10.1:7474
In the neo4j.conf file, uncomment the following line:
# With default configuration Neo4j only accepts local connections.
# To accept non-local connections, uncomment this line:
dbms.connectors.default_listen_address=0.0.0.0
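If you'd rather not widen every connector, Neo4j 3.x also supports binding just the HTTP connector (an alternative to the line above; verify the setting name against your Neo4j version):
# neo4j.conf -- expose only the HTTP connector on all interfaces
dbms.connector.http.listen_address=0.0.0.0:7474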
Elasticsearch
Modify elasticsearch.yml and add the following line:
network.host: 0.0.0.0
Then start elasticsearch.
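A quick way to confirm both services are now reachable on the LAN address (IPs and ports as in the question):
# Elasticsearch should return its cluster-info JSON
curl http://192.168.10.1:9200/

# Neo4j browser endpoint should answer with HTTP headers
curl -I http://192.168.10.1:7474/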

mosquitto (mqtt broker) is refusing connections over websockets

I've set up a mosquitto broker but it refuses connections over websockets.
Here is my conf file:
# Place your local configuration in /etc/mosquitto/conf.d/
#
# A full description of the configuration file is at
# /usr/share/doc/mosquitto/examples/mosquitto.conf.example
pid_file /var/run/mosquitto.pid
persistence true
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto/mosquitto.log
include_dir /etc/mosquitto/conf.d
listener 1883 0.0.0.0
listener 8008 0.0.0.0
protocol websockets
and I don't have any config files in conf.d.
Using the Paho JavaScript client I get an ERR_CONNECTION_REFUSED.
By the way, I'm using Debian Jessie as the OS.
EDIT 1:
I've dropped the iptables rules and it is still not working.
The usual way to connect (plain MQTT on port 1883) is working.
Here is the output when I start mosquitto:
1477788244: mosquitto version 1.4.10 (build date Thu, 25 Aug 2016 10:12:09 +0100) starting
1477788244: Using default config.
1477788244: Opening ipv4 listen socket on port 1883.
1477788244: Opening ipv6 listen socket on port 1883.
The important line in the startup output is here:
1477788244: Using default config.
This says that mosquitto is using its built-in config (only listening on 1883 for native MQTT traffic) and not even reading your config file.
If you just start mosquitto with no command line options, this is what it uses; it will not look for a config file in /etc/mosquitto/.
You need to explicitly tell mosquitto where its config file is with the -c option:
mosquitto -c /etc/mosquitto/mosquitto.conf
Depending on how you installed mosquitto, you may need to edit the scripts that automatically start it on boot. These are probably here: /etc/init.d/mosquitto
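Once mosquitto is started with the correct config, you can confirm the websockets listener is actually bound (port 8008 as in the conf file above):
# mosquitto should now be listening on both 1883 (MQTT) and 8008 (websockets)
sudo netstat -lntp | grep mosquitto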

Kibana deployment issue on server: client not able to access GUI

I have configured Logstash + ES + Kibana on a VM at 100.100.0.158, and Kibana is running under an Apache server on port 8080.
What I need is to give the URL "100.100.0.158:8080/kibana" to the client so the client can see their data on the web.
When I put this URL in the client's browser I get this error:
"can't contact elasticsearch at http://"127.0.0.1":9200 please ensure that elastic search is reachable from your system"
Do I need to configure ES with IP 100.100.0.158:9200, or is 127.0.0.1:9200 OK?
Thanks,
Tushar
If your Kibana and ES are installed on the same box, you can have Kibana auto-detect the ES URL/IP by using this line in Kibana's config.js file:
/** #scratch /configuration/config.js/5
* ==== elasticsearch
*
* The URL to your elasticsearch server. You almost certainly don't
* want +http://localhost:9200+ here. Even if Kibana and Elasticsearch are on
* the same host. By default this will attempt to reach ES at the same host you have
* elasticsearch installed on. You probably want to set it to the FQDN of your
* elasticsearch host
*/
elasticsearch: "http://"+window.location.hostname+":9200",
This is because the interface between Kibana and ES is via JavaScript, and so using 127.0.0.1 or localhost actually points to the client machine (that the browser is running on) rather than the server.
Modify the Elasticsearch configuration file elasticsearch.yml, appending or modifying the following settings:
# Enable or disable cross-origin resource sharing.
http.cors.enabled: true
# Which origins to allow.
http.cors.allow-origin: /https?:\/\/<*your\.kibana\.host*>(:[0-9]+)?/
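For example, filled in with the Kibana host from the question, the pattern might look like this (a sketch; adjust the scheme and port to match your Apache setup):
# Allow cross-origin requests from the Kibana page served on 100.100.0.158
http.cors.enabled: true
http.cors.allow-origin: /https?:\/\/100\.100\.0\.158(:[0-9]+)?/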
This is caused by the Kibana page trying to load JSON data from Elasticsearch, which is blocked for security reasons as a cross-origin request.
It is about iptables rules. Kibana uses 9292 as its web port, but Elasticsearch queries use 9200, so you must add iptables rules for these ports. Run
netstat -napt | grep -i LISTEN
and you will see these ports: 9200 9300 9301 9302 9292
iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 9200 -j ACCEPT
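and, following the same pattern, a rule for the Kibana web port:
iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 9292 -j ACCEPT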
See details: http://logstash.net/docs/1.3.3/tutorials/getting-started-simple
