Snowball Edge - aws-sdk-go package in Golang - Can't Connect to S3 - go

I'm using the aws-sdk-go package in Golang to connect to Amazon S3 to provide a cloud-based storage pool, and I have this working well. I would like to support bulk high-speed transfers using Snowball, so I got a Snowball Edge to test this in my lab. I have not figured out how to get it working, and the documentation for Snowball Edge doesn't seem complete. Part of the problem may be that I ordered a Snowball Edge and not just a Snowball.
The reason I'm finding the Edge more problematic is that a normal Snowball requires an application called snowballAdapter to be running, which looks like it handles some port-mapping issues. But this application seems to be incompatible with the Edge device, as it reports that it doesn't work with a "Snowball Edge Manifest file".
I looked at the ports that are available on the real AWS S3 and nmap reports:
nmap -v -sT -Pn s3.us-east-1.amazonaws.com
...
Scanning s3.us-east-1.amazonaws.com (52.216.161.53) [1000 ports]
Discovered open port 443/tcp on 52.216.161.53
Discovered open port 80/tcp on 52.216.161.53
Whereas on Snowball Edge, the ports are:
nmap -v -sT -Pn 192.168.1.4
....
Scanning 192.168.1.4 [1000 ports]
Discovered open port 8080/tcp on 192.168.1.4
Discovered open port 22/tcp on 192.168.1.4
Discovered open port 9091/tcp on 192.168.1.4
Discovered open port 8443/tcp on 192.168.1.4
....
PORT STATE SERVICE
22/tcp open ssh
8080/tcp open http-proxy
8443/tcp open https-alt
9091/tcp open xmltec-xmlmail
So, it seems to me that the issue may be that I have to make the aws package use port 8443 for the Snowball Edge instead of 443 for the real S3. The code for connecting to S3 is pretty straightforward:
creds := credentials.NewStaticCredentials(s3Config.S3AccessKey, s3Config.S3SecretAccessKey, s3Config.S3Token)
_, err := creds.Get()
if err != nil {
    return nil, nil, err
}
if len(baseFolder) > 0 {
    baseFolder = baseFolder + "/"
}
cfg := aws.NewConfig().WithRegion(s3Config.S3Region).WithCredentials(creds)
svc := s3.New(session.New(), cfg)
params := &s3.ListObjectsInput{
    Bucket:    aws.String(s3Config.S3BucketName),
    Prefix:    aws.String(baseFolder),
    Delimiter: aws.String("/"),
}
resp, err := svc.ListObjects(params)
So, the question is, how do I change the code to point to the Snowball Edge? I've tried mapping the Amazon S3 endpoint to the Snowball Edge in /etc/hosts; I understand why this didn't work after discovering that the ports are different. I've also played around with different forms of WithEndpoint("...host...") with no success. Or am I on the completely wrong track, and should I be able to get the snowballAdapter to work with a Snowball Edge?
By the way, all the snowballEdge commands work as expected, so the device seems to be working fine, e.g.:
./snowballEdge list-access-keys
{
"AccessKeyIds" : [ "..." ]
}
./snowballEdge get-secret-access-key --access-key-id ....
[snowballEdge]
aws_access_key_id = ...
aws_secret_access_key = ...
And, I've used the correct keys associated with the device, and it does have the S3 Service configured in it:
./snowballEdge list-services
{
"ServiceIds" : [ "s3" ]
}

Snowball Edge is a very different beast from AWS S3. In addition to the access key and secret access key, it requires an endpoint and an additional credential in the form of a certificate. The real AWS S3 has a valid certificate backed by the public certificate authorities, but the Snowball Edge has a self-signed certificate.
The configuration can be created with this:
cfg := aws.NewConfig().WithRegion(s3Config.S3Region).WithCredentials(creds).WithEndpoint(s3Config.S3Endpoint).WithHTTPClient(httpClient)
svc = s3.New(session.New(), cfg)
The endpoint looks like this (the nmap output in the question shows the Snowball Edge responding on port 8443):
https://192.168.1.4:8443
The certificate has to be formatted exactly as it would appear in a file, i.e. the string you pass in must include the newline character after every line. It looks like this:
-----BEGIN CERTIFICATE-----
MIIC7zCCAdegAwIBAgIJBJZB/gkBP0B5MA0GCSqGSIb3DQEBCwUAMBYxFDASBgNV
BAMMCzE3Mi4yMC4xLjE3NB4XDTE3MDQyODIwMTMwOFoXDTE5MDQxODIwMTMwOFow
FjEUMBIGA1UEAwwLMTcyLjIwLjEuMTcwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw
ggEKAoIBAQCrLWlTfrSj9R1of5Z98EHYIEEPBgnWxnlrvA+ryAzPmiXbYomI4Tpl
PsuIA+7hGXG10H0zwlz0n22EUv4pE79toYcd3czOJUAHEuSelhtP7u91vM4GguFx
A00gosu04RFUD+BYNeaLTQfd7vdmQB3bY3KEbn7Dfrs/1MYFhKb8J77mgCuUbAPu
PNvwLoV+hBL+ndgs+bIu4MtXjUJDiigRZkacpMQaduDMqEq6seoc+JwrKNBjRBRu
3l/fcQoWf+g902oZJaXXnVGqqb7o2YAQFehUAbmCJfuKFSl5tu0B+3KvQQni7lK+
SV8WItdrPumS98BBlt6NpzgC5fTwCmapAgMBAAGjQDA+MAwGA1UdEwQFMAMBAf8w
HQYDVR0OBBYEFMAvKzKgKI+izqPX6DJjJz/0fELtMA8GA1UdEQQIMAaHBKwUAREw
DQYJKoZIhvcNAQELBQADggEBAGwyzmI+9psQu9/N/oClN7Lej7e4E8cC8vymVfPz
fdW45IMNVEYHxHbu9+JzLOtLxfuDmD6B6fEYVoPubb6tsnacuwOSMZhTvhhy9nv2
f+2Pslgj/kYTeMePbHOPTyQ4sd1BE7ALdNiL/hd08ZNhqObagixNYw9eYeHEStBy
tOADKcY9gOxek1k9t+96nATgSy0WIytwra0uEgyipKQ2gXKpgg15SI4nDxQLLEgG
lb3FtRk+PfJxQ4zbHZe/cRNflcGwVCefycLQOA2Sdr8pgHW7gvETu9i9ywF0UV6f
b9wsPcDmg3EaxBa+wrLlYSzaPhI+rZYh6bpnTn311QIFZ+s=
-----END CERTIFICATE-----
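The config shown earlier passes an httpClient via WithHTTPClient but doesn't show how that client is built. Here is a minimal sketch (my own, not from the original answer) of constructing an http.Client that trusts the device's self-signed certificate, assuming the PEM text above is available as a string; the function name is mine:

// Sketch only: build an *http.Client that trusts the Snowball Edge's
// self-signed certificate. certPEM holds the PEM text shown above,
// newlines included. Imports: crypto/tls, crypto/x509, errors, net/http.
func buildSnowballHTTPClient(certPEM string) (*http.Client, error) {
    pool := x509.NewCertPool()
    if !pool.AppendCertsFromPEM([]byte(certPEM)) {
        return nil, errors.New("could not parse Snowball Edge certificate")
    }
    return &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool},
        },
    }, nil
}

The returned client is what would then be passed to WithHTTPClient in the configuration above.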
We found the Snowball Edge to be pretty fast storage using S3: we were able to achieve a copy rate of about 1,100 MBytes/second.

Related

Kafka and Golang with SSL

I am trying to connect to a topic that requires SSL. Can anyone suggest an approach to achieve this?
I have a ca.crt, ca.p12 and password provided by the kafka cluster administrator.
I can connect just fine using a test utility that leverages the keytool/keystore, but these don't work for Go.
The code to connect to the cluster is as follows:
func (p *KafkaPublisher) Initialise() {
    configFile := "./config/kafka.properties"
    fmt.Printf("Reading config file from: %s\n", configFile)
    conf := ReadConfig(configFile)
    var err error
    kafkaProducer, err = kafka.NewProducer(&conf)
    // rest of code commented out.
}
And this works just fine for SASL using the following config file:
bootstrap.servers=markets-cb--vog--pu-taejvaba.bf2.kafka.rhcloud.com:443
security.protocol=SASL_SSL
sasl.mechanisms=PLAIN
# Service account user name
sasl.username=700e9c83-b4be-4f23-8697-b6cfa5921354
sasl.password=ab955a0d-e78d-4d96-bd8e-35de3b6b83e5
# Best practice for Kafka producer to prevent data loss
acks=all
I reviewed the following documentation to try and find the right properties:
https://docs.confluent.io/platform/current/clients/librdkafka/html/md_CONFIGURATION.html#autotoc_md91
but have been unsuccessful with this configuration:
bootstrap.servers=my-cluster-kafka-listener1-bootstrap-climate.violet-cluster-new-2761a99850dd8c23002367ac6ce7f9ad-0000.au-syd.containers.appdomain.cloud:443
security.protocol=SSL
ssl.key.password=ebuSuzkDbfFK
ssl.certificate.location="ca.pem"
# Best practice for Kafka producer to prevent data loss
acks=all
I have also tried putting the pem file inline as a single quote-delimited line using the property ssl.certificate.pem. This produced the most encouraging result, with an error:
Failed to create producer: ssl.certificate.pem failed: not in PEM format?: error:0909006C:PEM routines:get_name:no start line: Expecting: CERTIF
But it is in PEM format...
Here is the certificate (don't worry, there is no security risk in my pasting it here, as the cert has since been removed from the server and I changed the URL and password above):
-----BEGIN CERTIFICATE-----
MIIFLTCCAxWgAwIBAgIUBT4au51IElFmVL4RenvrRsFpwiwwDQYJKoZIhvcNAQEN
BQAwLTETMBEGA1UECgwKaW8uc3RyaW16aTEWMBQGA1UEAwwNY2x1c3Rlci1jYSB2
MDAeFw0yMjA3MTkwNTE4NDBaFw0yMzA3MTkwNTE4NDBaMC0xEzARBgNVBAoMCmlv
LnN0cmltemkxFjAUBgNVBAMMDWNsdXN0ZXItY2EgdjAwggIiMA0GCSqGSIb3DQEB
AQUAA4ICDwAwggIKAoICAQDdghbY97oYE5A1GcGccMMp01RW3DSsl3tZ3U/Q2YKY
IDkqwkITevXu0WjUHh7v/659xwryNMtUjlz8JF4MZQnZwq1xEX6ldA+/+JeG2pJE
eEnFXPvP9meDfi2N5bQC/At5N5ZAca4jfrKVognzgHMj1JMwXtTLu4Jw73Za+dRg
y30I51x33zPTYgq+5QKTssOvvAQp+OGsf2ts1s3P5weOLJ3tfGUrVdhoblMB+RUl
TF7KIuknFY0+sNRUJSeuw29qUdH9KFJ3bYBMEF2afSybS4DlFSrs7Od0ZsPWnONn
ZRV/SdjpvjP13k32XqzX8O1h0oOnHOPExE7OQgjRVcluJh3aJbSEAb9DnAUCRX88
/ZA2tlDWGxJIv94CruKDaF6vMaupUUXmQG0Irk8cfGKHgwjvG2HL2U/oaJpPEtdo
Zu2TgeHxF/k9YBfzKC6ZlZrwQLN1iL+mAL45ql1OYyGVWPS6NWUSpJA9FS7qJlfr
o7hiD/wg4xubtGntBCui+jdrRoDwcVQvk5+pZYNchK26oT4qjY57YhXLWByesg+c
MUSDTgtU//DHfAq6VyJqMbnP68RwW1MRMg1lbiMUTJ8wTsKgBm6fZ1E2+Jg/wJtq
FhLTBjiB6Fa2m6aZQdqaN/RMu9mUNjE45b4GVIFayouS1ejGoNNsC2rv0n4CQhNj
+wIDAQABo0UwQzAdBgNVHQ4EFgQU2YAwObuT5NpEivgwO+j+Z/qSn1gwEgYDVR0T
AQH/BAgwBgEB/wIBADAOBgNVHQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQENBQADggIB
AB+18g/YjUYpLbBrQgzvHhNP0THKUzVQ5ze15JZfaQHdOf6XYhAMpnHm/mBrXM2B
5ZHwkQzCuHfZmSikeVnr/thloh1+2NHdf9s4XQtxZiouZxzNnbX3Hf/gviX/lvm2
p8twSqxsMnI00x7jPDGDmZBF5bX7Mtp3c7gD3K4goz8InHdn8j1jxhYg0fQdX8j3
ryoy+hWCkaW1PPYxGgrmdJg7kiffTBw3jx9+Md11EXb7+ryZeUEsI6MqGQSg6F94
U+nPWV/qBErEbe5iNISdOkUK8wcjk/IOVAps1CJs8BECcCaReGwVpC8twxx9c8BY
DEW76Y8J9syxgxUZeEywoqguxc80SSuTLXcNkAoifdReUeW/b08cJffP55nztfbE
Fhg4E5vYj41q2Z5iOI1sZsY22Z4VszW0Fl11DcloM4/088W6O3Lp3Jo0rMu/k14/
DNr5AM8Lrgno947S1OWZ87Q1IF8zlayM+c5XWRHO64jBHouTo1HvDodudaF+XYxv
F7xRVUehnnACQExy2OYeOkjtxmsinQDfZcvvj7b7NfCqytM3IjB1jk9GxeoFgKYj
/n9WFjHWtSnC+nsyZo2c37XeaHbBEtls3LHXb6+OmOtiTzw0C8TEQJ4AcqaPNk1k
knHg6PPWBenWskC9KH898c6vvhZ5/VHSWXJG6f8GxWya
-----END CERTIFICATE-----
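One configuration worth sketching, purely as an assumption on my part rather than a confirmed fix: if ca.pem is the cluster CA certificate, librdkafka expects it under ssl.ca.location, whereas ssl.certificate.location / ssl.certificate.pem are for a client certificate (see the configuration reference linked above). Expressed directly as a confluent-kafka-go ConfigMap rather than a properties file:

// Sketch only: treat ca.pem as the broker CA rather than a client certificate.
// Property names come from the librdkafka CONFIGURATION reference; the file
// path and bootstrap address are taken from the question.
// Imports: log, plus the same kafka package used by Initialise above.
conf := kafka.ConfigMap{
    "bootstrap.servers": "my-cluster-kafka-listener1-bootstrap-climate.violet-cluster-new-2761a99850dd8c23002367ac6ce7f9ad-0000.au-syd.containers.appdomain.cloud:443",
    "security.protocol": "SSL",
    "ssl.ca.location":   "ca.pem",
    "acks":              "all",
}
producer, err := kafka.NewProducer(&conf)
if err != nil {
    log.Fatalf("failed to create producer: %v", err)
}
defer producer.Close()

If the cluster additionally requires a client certificate and key (the ca.p12 hints that it might), those would go in ssl.certificate.location and ssl.key.location alongside ssl.key.password.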

telegraf output to Elasticsearch: "health check timeout: no Elasticsearch node available"

I'm having trouble connecting to an Elasticsearch instance with a Telegraf output plugin.
I created an Elasticsearch setup via the Elasticsearch service. I created a user and password (connected to a role) in Kibana for it.
Then I setup a Telegraf output for it:
[[outputs.elasticsearch]]
urls = [ "https://hostname:port" ] # required.
timeout = "5s"
enable_sniffer = false
health_check_interval = "10s"
## HTTP basic authentication details.
username = "my_username"
password = "my_password"
index_name = "device_logs" # required.
insecure_skip_verify = true
manage_template = true
template_name = "telegraf"
overwrite_template = false
But when I try to start Telegraf with this, it just gives the error,
[agent] Failed to connect to [outputs.elasticsearch], retrying in 15s, error was 'health check timeout: no Elasticsearch node available'
The connection failure seems to originate deep in the bowels of golang's net/http library, and I don't know how to get more useful output at this point.
Things I've tried:
Thing #1: I tested cURL:
curl -u my_username:my_password -X POST "https://hostname:port/device_logs/_doc" -H 'Content-Type: application/json' -d'
{
"name": "John Doe"
}'
This works fine.
Thing #2: I created a simple Go program to connect to elasticsearch from Go:
package main

import (
    "log"

    "gopkg.in/olivere/elastic.v3"
)

func main() {
    // configure connection to ES
    client, err := elastic.NewClient(elastic.SetURL("https://hostname:port"))
    if err != nil {
        panic(err)
    }
    log.Printf("client.running? %v", client.IsRunning())
    if !client.IsRunning() {
        panic("Could not make connection, not running")
    }
}
... and it hits the first panic with the same "no Elasticsearch node available" error.
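One thing worth noting about the test program (my observation, not part of the original question): olivere/elastic sniffs the cluster for nodes by default, while the Telegraf config above explicitly disables sniffing, and the program never passes the username and password. A variant with sniffing off, basic auth set, and an error logger attached, all of which are documented client options, may report more detail than the bare panic:

// Sketch only: same test as above, with sniffing disabled, the basic-auth
// credentials from the Telegraf config, and an error logger so failures are
// printed. Additional import: os.
client, err := elastic.NewClient(
    elastic.SetURL("https://hostname:port"),
    elastic.SetSniff(false),
    elastic.SetBasicAuth("my_username", "my_password"),
    elastic.SetErrorLog(log.New(os.Stderr, "elastic: ", log.LstdFlags)),
)
if err != nil {
    log.Fatalf("could not create client: %v", err)
}
log.Printf("client running? %v", client.IsRunning())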
Thing #3: I tried running gdb on that Go program to debug into it.
It jumps down to assembly as soon as I call NewClient, so I can't really learn what is happening in the bowels of net/http.
I've never used Go before, so I'm hoping to avoid hours of learning Go, spelunking, and debugging to get around what hopefully is a simple issue here.
Any ideas on how to get more info here or why this is failing? Are there build or runtime flags for Go that I can use? gdb-with-Go debugging tips so I can step down into the Go library code? Elasticsearch client know-how?
To answer my own question: the problem here turned out to be the role's permissions. The Telegraf output plugin for Elasticsearch needs both the monitor and the manage_index_templates permissions to be enabled, or else it will fail to connect to the Elasticsearch server without printing any information about why.
By the way, to build golang code so that you can debug into the libraries it calls:
go build -gcflags=all="-N -l"

Unable to create online web-page

I am trying to create Golang web-pages...
Progress:
Ubuntu 18.04 installed both locally and on a Linode VPS.
Created and compiled a local Golang "Hello World" script that renders OK both locally and online.
Created a net/http Golang script that works OK when called locally at http://localhost:8080/testing.
Uploaded the script to the Linode server; the initial status messages appear, but when calling http://123.456.789.32:8080/testing the browser freezes.
//
// Golang - main.go
//
package main

import (
    "net/http"
)

func sayHello(w http.ResponseWriter, r *http.Request) {
    message := r.URL.Path
    message = "Hello " + message
    w.Write([]byte(message))
}

func main() {
    http.HandleFunc("/", sayHello)
    if err := http.ListenAndServe(":8080", nil); err != nil {
        panic(err)
    }
}
There are no errors or warnings rendered, and I am unable to find any log references.
Can errors and warnings similar to PHP's error_reporting(-1), declare(strict_types=1), etc. be logged or rendered?
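Go has no global switch comparable to error_reporting, but the standard library's log package can at least make the server say what it is doing. A minimal, purely illustrative variation of main.go with request logging and a loud failure if the listener cannot start:

// Sketch only: same server as above, with each request and any write error
// logged to stderr, and a fatal log if ListenAndServe fails.
package main

import (
    "log"
    "net/http"
)

func sayHello(w http.ResponseWriter, r *http.Request) {
    log.Printf("%s %s from %s", r.Method, r.URL.Path, r.RemoteAddr)
    message := "Hello " + r.URL.Path
    if _, err := w.Write([]byte(message)); err != nil {
        log.Printf("write failed: %v", err)
    }
}

func main() {
    http.HandleFunc("/", sayHello)
    log.Println("listening on :8080")
    if err := http.ListenAndServe(":8080", nil); err != nil {
        log.Fatal(err)
    }
}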
A quick check with Nmap showed this result:
nmap -sV -p 8080 <yourIP>
Starting Nmap 7.70 ( https://nmap.org ) at 2019-07-04 07:45 CEST
Nmap scan report for <your-domain>.com (<yourIP>)
Host is up (0.032s latency).
PORT STATE SERVICE VERSION
8080/tcp filtered http-proxy
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 0.90 seconds
The state of "filtered" actually means that there was no response on that port as opposed to an outright rejection of the request.
Check the output of iptables -L -n. Presumably, you have a firewall running and blocking port 8080. Do not simply deactivate the firewall, but read up on how to open port 8080 in the firewall product you are using. Linode has guides for the commonly used/preinstalled firewalls of various Linux distributions.
If you plan to go into production, please have someone help you to ensure security and availability of your deployment.

Ruby ODBC with remote database

I am working on an application that connects to a legacy database, Eloquence, through ODBC and SQL/R. I set up my server with unixODBC and set up the drivers and datasources as follows:
File /etc/odbcinst.ini
[SQLR]
Description=SQLR for Eloquence
Driver=/opt/sqlr/lib/libsqlrodbc.so
Driver64=/opt/sqlr/lib64/libsqlrodbc64.so
FileUsage = 1
File /etc/odbc.ini
[reservations]
Description = SQLR datasource for RES database
Driver = SQLR
Database = res
Servername = eloq-dev
Port = 8003
UserName = sqlrodbc
I confirmed that I can connect to the datasource by running isql reservations, and I ran a couple of queries to make sure. No issues. Then I connected my Ruby code to the database using the ODBC gem and the following code:
require 'rdbi-driver-odbc'
RDBI.connect :ODBC, db: "reservations"
Which outputs the following error:
Unable to connect to host.
Host 127.0.0.1, Service sqlrodbc
errno 111: Connection refused
ODBC::Error: 08001 (3047) [unixODBC][Marxmeier][SQL/R ODBC Client]connection failure
I'm concerned that it's using 127.0.0.1 as the host even though the eloq-dev hostname is set to a different address in /etc/hosts. I'm also concerned that isql works but the ODBC gem doesn't.
Additionally, when I use the tcpdump command, the only output related to my connection is this:
tcpdump -i lo
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes
18:38:39.688264 IP localhost.50447 > localhost.mcreport: Flags [S], seq 3355035364, win 43690, options [mss 65495,sackOK,TS val 1655798115 ecr 0,nop,wscale 7], length 0
18:38:39.688280 IP localhost.mcreport > localhost.50447: Flags [R.], seq 0, ack 3355035365, win 0, length 0
No packets are going out over the network at all.
I've also changed my code to use RDBI instead of Ruby-ODBC, but I have the same issue.
My issue was ultimately twofold. I was connecting to Eloquence and SQL/R over a VPN connection that wasn't as stable as I thought, so connections were dropping as a result.
The other issue was that SQL/R uses Server instead of ServerName and Service instead of Port in the odbc.ini file.
Once I stabilized my VPN and fixed the odbc.ini file, I was able to connect without issue.
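For clarity, applying those two renames to the datasource definition from the question would presumably give something like this (all other values carried over unchanged; treat it as a sketch rather than a verified file):
[reservations]
Description = SQLR datasource for RES database
Driver = SQLR
Database = res
Server = eloq-dev
Service = 8003
UserName = sqlrodbc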

Using nxlog to ship logs in to logstash from Windows using om_ssl

I have been looking at options to ship logs from Windows. I already have logstash set up, and I currently ship logs from Linux (CentOS) servers to my ELK stack using the logstash-forwarder and ssl encryption.
For compliance reasons, encryption is pretty much essential in this environment.
I was hoping to use logstash-forwarder on Windows as well, but after compiling it with Go I ran into issues shipping Event Logs, and I found some people saying that it wasn't possible because of file-locking issues, which the logstash-forwarder people appear to be working on, but I can't really wait.
Anyway, eventually I found out that nxlog seems to be able to ship logs in an encrypted format using ssl. I've found a few posts about similar topics, and while I've learned quite a bit about how to ship the logs across and how to set up nxlog, I am still at a loss as to how to set up logstash to accept the logs so I can process them.
I've asked in the #nxlog and #logstash irc channels, and got some confirmation in #nxlog that it is possible, but no further information on how it should be configured.
Anyway, I have taken the crt file created for use with my logstash-forwarder (I will create a new one if needed once I am happy that this will work) and renamed it with a pem extension, which I believe should work as it is readable in ASCII format. I have created the environment variable %CERTDIR% and put my file in there, and I have written the following config file for nxlog based on the other articles I have read. I think it is right, but I am not 100% sure:
## This is a sample configuration file. See the nxlog reference manual about the
## configuration options. It should be installed locally and is also available
## online at http://nxlog.org/nxlog-docs/en/nxlog-reference-manual.html
## Please set the ROOT to the folder your nxlog was installed into,
## otherwise it will not start.
#define ROOT C:\Program Files\nxlog
define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
# Enable json extension
<Extension json>
Module xm_json
</Extension>
# Nxlog internal logs
<Input internal>
Module im_internal
Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
</Input>
# Windows Event Log
<Input eventlog>
# Uncomment im_msvistalog for Windows Vista/2008 and later
Module im_msvistalog
# Uncomment im_mseventlog for Windows XP/2000/2003
# Module im_mseventlog
Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
</Input>
<Output sslout>
Module om_ssl
Host lumberjack.domain.com
Port 5000
CertFile %CERTDIR%/logstash-forwarder.crt
AllowUntrusted TRUE
OutputType Binary
</Output>
<Route 1>
Path eventlog, internal => sslout
</Route>
What I want to know is what input format to use in logstash. I have tried shipping logs into a lumberjack input type (using the same config as my logstash-forwarders use) with the following config:
input {
lumberjack {
port => 5000
type => "logs"
ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
}
}
But when the service started I got the following in the nxlog logfiles:
2014-11-06 21:16:20 INFO connecting to lumberjack.domain.com:5000
2014-11-06 21:16:20 INFO nxlog-ce-2.8.1248 started
2014-11-06 21:16:21 INFO successfully connected to lumberjack.domain.com:5000
2014-11-06 21:16:22 INFO remote closed SSL socket
2014-11-06 21:16:22 INFO reconnecting in 1 seconds
2014-11-06 21:16:23 INFO connecting to lumberjack.domain.com:5000
2014-11-06 21:16:24 INFO reconnecting in 2 seconds
2014-11-06 21:16:24 ERROR couldn't connect to ssl socket on lumberjack.antmarketing.com:5000; No connection could be made because the target machine actively refused it.
When I turned the logging up to DEBUG I saw a massive amount of logs flying through, but I think the key part is:
2014-11-06 21:20:18 ERROR Exception was caused by "rv" at om_ssl.c:532/io_err_handler(); [om_ssl.c:532/io_err_handler()] -; [om_ssl.c:501/om_ssl_connect()] couldn't connect to ssl socket on lumberjack.domain.com:5000; No connection could be made because the target machine actively refused it.
I assume this points to me using the wrong input method in logstash, but I guess it could also be an issue with my ssl certs or the way they are configured. I don't see any logs being generated on the logstash server at the time I make the connection from my Windows machine.
Thanks to b0ti for the help. There were a number of issues: my logstash config was crashing the service, but I also had problems with my nxlog setup, as well as with my ssl certs not being set up the correct way.
I found this post about creating ssl certs, which covers the way they are set up really nicely for self-signed certs for use with a web service.
The main thing wrong with nxlog was, as b0ti pointed out, that I was trying to ship in binary format, which only works when shipping to an nxlog server. I also noticed in the docs that the default for AllowUntrusted is false, so I just had to delete it once I was happy ssl was working.
<Output sslout>
Module om_ssl
Host lumberjack.domain.com
Port 5001
CAFile %CERTDIR%\nxlog-ca.crt
OutputType LineBased
</Output>
Create the CA key and secure it, as this needs to be kept secret (cd to /etc/pki/tls):
certtool --generate-privkey --bits 2048 --outfile private/nxlog-ca.key
chown logstash:logstash private/nxlog-ca.key
chmod 600 private/nxlog-ca.key
And then the self-signed CA cert, which will need to be transferred to your clients:
certtool --generate-self-signed --load-privkey private/nxlog-ca.key --bits 2048 --template nxlog-ca-rules.cnf --outfile certs/nxlog-ca.crt
The cnf file is standard, with only this option modified:
# Whether this is a CA certificate or not
ca
The logstash input method:
input {
tcp {
port => 5001
type => "nxlogs"
ssl_cacert => "/etc/pki/tls/certs/nxlog-ca.crt"
ssl_cert => "/etc/pki/tls/certs/nxlog.crt"
ssl_key => "/etc/pki/tls/private/nxlog.key"
ssl_enable => true
format => 'json'
}
}
Generate the private key:
certtool --generate-privkey --bits 2048 --outfile private/nxlog.key
chown logstash:logstash private/nxlog.key
chmod 600 private/nxlog.key
Generate the CSR (Certificate Signing Request):
certtool --generate-request --bits 2048 --load-privkey private/nxlog.key --outfile private/nxlog.csr
Sign the cert with the CA private key:
certtool --generate-certificate --bits 2048 --load-request private/nxlog.csr --outfile certs/nxlog.crt --load-ca-certificate certs/nxlog-ca.crt --load-ca-privkey private/nxlog-ca.key --template nxlog-rules.cnf
Again, the only important part beyond the standard cnf file inputs will be:
# Whether this certificate will be used to encrypt data (needed
# in TLS RSA ciphersuites). Note that it is preferred to use different
# keys for encryption and signing.
encryption_key
# Whether this certificate will be used for a TLS client
tls_www_client
I've tested this and it works well; I just need to get the filters set up now.
The binary data format is nxlog-specific; you should only use it if you send to nxlog.
OutputType Binary
If this doesn't help, check the logstash logs, since it's the remote end (logstash) that closes the connection.
