SSH tunnel for Elasticsearch

I am on a vpn which does not allow access to elasticsearch directly, so I am trying to ssh tunnel to an external box that has access.
I am tunneling with the following:
ssh -L 12345:<elastic_ip>-east-1.aws.found.io:9200 <user>@<tunnel_box>
but then if I curl:
curl http://user:pass@localhost:12345
I get:
{"ok":false,"message":"Unknown cluster."}
Yet, if I try this from the box directly:
curl http://user:pass@<elastic_ip>-east-1.aws.found.io:9200
I get:
{
  "status" : 200,
  "name" : "instance",
  "cluster_name" : "<cluster>",
  "version" : {
    "number" : "1.7.2",
    "build_hash" : "<build>",
    "build_timestamp" : "2015-09-14T09:49:53Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
What am I doing wrong?

Here is how you can do it using SSH tunneling with PuTTY.
Below are the steps you need to take in order to configure SSH tunneling using PuTTY:
Download PuTTY and install it.
Configure PuTTY tunneling for the Elasticsearch 9200 and 9300 ports: in the Tunnels settings, forward local port 9090 to the remote host's 9200 and local port 9093 to its 9300, matching the Java example below. (The original answer illustrated this step with a screenshot, not reproduced here.)
After configuring, you'll need to open the SSH connection and make sure it is connected.
You can check PuTTY's SSH event log to validate your tunnel.
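For reference, the same forwarding can be expressed with PuTTY's command-line client plink (a sketch; user@tunnel_host and <elastic_host> are placeholders for your jump box and the Elasticsearch machine):

plink -ssh user@tunnel_host -L 9090:<elastic_host>:9200 -L 9093:<elastic_host>:9300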
Below is Elasticsearch client code written in Java that shows how to connect to the remote Elasticsearch cluster using the local ports (9090 and 9093) forwarded over the PuTTY SSH client.
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class App
{
    public static void main(String[] args) throws Exception
    {
        // cluster.name must match the remote cluster's name
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", "my-cluster").build();

        // Connect to the transport port (9300), forwarded locally as 9093
        TransportClient client = new TransportClient(settings)
                .addTransportAddress(
                        new InetSocketTransportAddress("localhost", 9093));

        CreateIndexResponse rs = client.admin().indices().create(
                new CreateIndexRequest("tunnelingindex"))
                .actionGet();

        System.out.println(rs.isAcknowledged());
        client.close();
    }
}
The code creates an index named tunnelingindex on Elasticsearch.
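If you also forwarded the HTTP port (local 9090 to remote 9200, as configured above), you can verify the index over the tunnel with a quick curl; ?pretty just formats the response:

curl "http://localhost:9090/tunnelingindex?pretty"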
Hope it helps.

This is a problem at the HTTP level: requests carry a hostname (the Host header) and not only an IP address, so if you issue the request against localhost, that hostname is what gets passed to the cluster, and Found cannot tell which cluster you mean, hence the "Unknown cluster" error.
There are basically two solutions, both quite hacky:
Set up your Elasticsearch hostname to localhost so it will recognize your query.
Set up your /etc/hosts to point <elastic_ip>-east-1.aws.found.io at 127.0.0.1, connect your SSH tunnel using the direct IP, and then curl the real address, as sketched below.
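A minimal sketch of the second approach (user@tunnel_box is a placeholder for the external box from the question):

# /etc/hosts on your machine: send the cluster hostname to the tunnel's local end
127.0.0.1   <elastic_ip>-east-1.aws.found.io

# Open the tunnel, binding local port 9200; the forward target is resolved on
# the remote box, so your local /etc/hosts override does not affect it
ssh -L 9200:<elastic_ip>-east-1.aws.found.io:9200 user@tunnel_box

# Curl the real hostname: it resolves to 127.0.0.1 locally, while the Host
# header still matches what the cluster expects
curl http://user:pass@<elastic_ip>-east-1.aws.found.io:9200

Alternatively, keep the original tunnel on port 12345 and just override the Host header:

curl -H "Host: <elastic_ip>-east-1.aws.found.io" http://user:pass@localhost:12345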

Related

OpenSearch docker instance only allowing HTTPS connections

I'm trying to get OpenSearch configured on my local machine, and am deploying it through docker-compose using the following configuration:
opensearch:
  image: opensearchproject/opensearch:1.0.0
  restart: unless-stopped
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    discovery.type: single-node
The instance starts successfully; however, when accessing it through the web interface, it only accepts HTTPS connections with the default basic auth credentials (admin:admin). I.e.,
https://localhost:9200 asks me to enter administrator credentials, and upon doing so, returns an expected response:
{
  "name" : "a39dcf825899",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "d2ZBZDQRTyG6SvYlCmX3Iw",
  "version" : {
    "distribution" : "opensearch",
    "number" : "1.0.0",
    "build_type" : "tar",
    "build_hash" : "34550c5b17124ddc59458ef774f6b43a086522e3",
    "build_date" : "2021-07-02T23:22:21.383695Z",
    "build_snapshot" : false,
    "lucene_version" : "8.8.2",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "The OpenSearch Project: https://opensearch.org/"
}
However, when attempting to connect to the instance over HTTP, I get an empty response.
In Chrome, the page fails with an empty-response error (screenshot omitted).
Using the OpenSearch Python client on a Django instance running in a separate Docker container (part of the same docker-compose.yml), I get:
opensearchpy.exceptions.ConnectionError: ConnectionError(('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))) caused by: ProtocolError(('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')))
For reference, the code I am using to connect the OpenSearch Python client to the OpenSearch instance is:
cls._os_client = OpenSearch(
    [{"host": 'opensearch', "port": '9200'}],
    use_ssl=False,
    verify_certs=False,
    ssl_assert_hostname=False,
    ssl_show_warn=False
)
How can I configure OpenSearch to allow insecure HTTP connections?
You can disable security: just add DISABLE_SECURITY_PLUGIN=true to your env.
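Applied to the docker-compose.yml from the question, that looks like the sketch below. DISABLE_SECURITY_PLUGIN turns off TLS and authentication entirely, so treat it as a local-development setting:

opensearch:
  image: opensearchproject/opensearch:1.0.0
  restart: unless-stopped
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    discovery.type: single-node
    DISABLE_SECURITY_PLUGIN: "true"

After recreating the container, plain http://localhost:9200 should answer without credentials.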

Is there anyway to define CIDR block as client_addr value in Consul server config?

I was getting myself familiar with Consul services and trying things out. However, so far I haven't found a way to allow only specific subnets to send requests to the Consul server.
Here is my basic Consul config.json:
{
  "server": true,
  "datacenter": "dc1",
  "data_dir": "/opt/consul",
  "bind_addr": "{{ ansible_ssh_host }}",
  "client_addr": "0.0.0.0",
  "bootstrap_expect": 1,
  "node_name": "consul_server",
  "ui": true,
  "encrypt": "",
  "acl": {
    "enabled": true,
    "default_policy": "deny",
    "down_policy": "extend-cache"
  }
}
In this case, client_addr is set to anywhere (0.0.0.0). How can I set it to something like 10.10.4.0/24, 10.10.2.0/24, or 10.10.0.0/16?
The client_addr config option controls which interfaces Consul will bind to for the DNS, HTTP[S], and gRPC listeners. You can specify a space-separated list of addresses on the machine on which Consul should listen. E.g.,
{
  "client_addr": "192.0.2.10 198.51.100.20 203.0.113.30"
}
This won't prevent Consul from being reachable from clients on other CIDRs that can route to one of the listening IPs. You'll need to use a firewall if you want to restrict which IPs can communicate with Consul.
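For example, with firewalld you could admit only one of the subnets from the question to the HTTP API (a sketch; it assumes the API is on Consul's default port 8500):

sudo firewall-cmd --permanent --zone=public \
  --add-rich-rule='rule family="ipv4" source address="10.10.4.0/24" port port="8500" protocol="tcp" accept'
sudo firewall-cmd --reload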
You can, however, restrict which CIDRs Consul will accept API write requests from using the http_config.allow_write_http_from configuration option.
{
  "http_config": {
    "allow_write_http_from": [
      "192.0.2.0/24",
      "198.51.100.0/24",
      "203.0.113.0/24"
    ]
  }
}
This example config will only allow HTTP PUT/POST/DELETE requests from clients residing in one of the listed address ranges.
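To verify, a KV write from outside the listed ranges should be rejected while reads keep working (the key name is illustrative, and <consul_ip> stands for one of the addresses Consul listens on):

curl -X PUT --data 'bar' http://<consul_ip>:8500/v1/kv/write-test   # rejected from a disallowed CIDR
curl http://<consul_ip>:8500/v1/kv/write-test                       # GET requests are not restricted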

Why is rsyslog unable to parse incoming syslogs with a JSON template when they are forwarded over TCP to some port (say 10514)?

I am currently forwarding the incoming syslogs via rsyslog to a local Logstash port. I am using the template below, which resides in /etc/rsyslog.d/json-template.conf.
The contents of json-template.conf are as follows:
template(name="json-template"
         type="list") {
    constant(value="{")
    constant(value="\"@timestamp\":\"")     property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"@version\":\"1")
    constant(value="\",\"message\":\"")     property(name="msg" format="json")
    constant(value="\",\"sysloghost\":\"")  property(name="hostname")
    constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
    constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
    constant(value="\",\"programname\":\"") property(name="programname")
    constant(value="\",\"procid\":\"")      property(name="procid")
    constant(value="\"}\n")
}
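For reference, each message run through this template comes out as one JSON object per line, along these lines (field values illustrative):

{"@timestamp":"2019-05-28T12:34:56+00:00","@version":"1","message":" service started","sysloghost":"myhost","severity":"info","facility":"daemon","programname":"systemd","procid":"1"}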
Configuration for forwarding in /etc/rsyslog.conf (@@ selects TCP):
*.* @@127.0.0.1:10514;json-template
rsyslog is able to send the incoming syslogs to port 10514, but the meaningful information is not parsed out of them.
NOTE: I have the same setup for UDP, and there rsyslog is able to parse all the messages as per the JSON template.
I tried the same rsyslog configuration with UDP.
Configuration for forwarding in /etc/rsyslog.conf (a single @ selects UDP):
*.* @127.0.0.1:10514;json-template
and rsyslog is able to parse everything from the syslog (timestamp, message, sysloghost).
All the necessary firewall configuration for opening the TCP and UDP ports is taken care of:
for TCP:
sudo firewall-cmd --zone=public --add-port=10514/tcp
for UDP:
sudo firewall-cmd --zone=public --add-port=10514/udp
The only thing I am not able to figure out is what I am missing w.r.t. parsing syslogs with TCP forwarding.
Expected outcome: rsyslog should be able to parse syslogs as per the JSON template.
I found out the problem: the json-template emits JSON rather than RFC3164- or RFC5424-formatted messages, so we have to add a filter in the Logstash configuration file to parse the JSON as it is.
My Logstash configuration file looks like below:
input {
  tcp {
    host => "127.0.0.1"
    port => 10514
    type => "rsyslog"
  }
}

# This filter parses the JSON produced by the rsyslog json-template and
# drops any events that fail to parse
filter {
  json {
    source => "message"
  }
  if "_jsonparsefailure" in [tags] {
    drop {}
  }
}

# This output block will send all events of type "rsyslog" to Elasticsearch at the configured
# host and port, into daily indices of the pattern "logstash-YYYY.MM.DD"
output {
  if [type] == "rsyslog" {
    elasticsearch {
      hosts => [ "localhost:9200" ]
    }
  }
}
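To check the pipeline end to end, you can emit a test message into the local syslog and look for it in the daily Logstash index (a sketch; logger writes to the local syslog, which rsyslog then forwards via the *.* rule above):

sudo systemctl restart rsyslog
logger "hello from rsyslog"
curl 'http://localhost:9200/logstash-*/_search?q=message:hello&pretty'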

Javascript get request from https server to localhost:port with self signed SSL

I have two servers configured and running on my Debian server: one main server and one Elasticsearch (search engine) server.
The main server is an HTTPS Node server behind an NGINX proxy with a purchased SSL certificate. The Elasticsearch server runs over HTTP. I've added a new NGINX proxy server to redirect https://localhost:9999 to http://localhost:9200, with a self-signed SSL certificate. There is also authentication configured on the Elasticsearch server, with a username and a password.
Everything seems to be properly configured, since I get a successful response when I curl https://localhost:9999 from the server's terminal with the -k option to bypass verification of the self-signed certificate; without it, it does not work.
I cannot do a cross-domain request from my HTTPS main server to my HTTP localhost server. Therefore I need to configure HTTPS on my localhost server.
Without the -k option:
curl: (60) SSL certificate problem: self signed certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
With the -k option:
{
  "name" : "server-name",
  "cluster_name" : "name",
  "cluster_uuid" : "uuid",
  "version" : {
    "number" : "x.x.x",
    "build_hash" : "abc123",
    "build_date" : "Timestamp",
    "build_snapshot" : false,
    "lucene_version" : "x.x.x"
  },
  "tagline" : "You Know, for Search"
}
Which is a successful Elasticsearch server response.
So the full curl request looks something like curl -k https://localhost:9999/ --user username:password.
So, the actual question:
I would like to be able to do a simple jQuery AJAX request to this server. I'm trying the following request: $.get('https://username:password@localhost:9999/'), but I'm getting ERR_CONNECTION_REFUSED.
My guess is that the AJAX request does not bypass the self-signed certificate verification and therefore the connection is refused.
Is there any simple way to solve this with request headers or something like that? Or do I need to purchase a CA certificate to make this work with AJAX?
You are right, the problem is the self-signed certificate. If you try the same request over plain http, it will work.
Here is a workaround to make ElasticSearch work with https:
You need to implement your own Http Connector:
var HttpConnector = require('elasticsearch/src/lib/connectors/http');
var inherits = require('util').inherits;
var qs = require('querystring');
var fs = require('fs');

function CustomHttpConnector(host, config) {
  HttpConnector.call(this, host, config);
}

inherits(CustomHttpConnector, HttpConnector);

// This function is copied and modified from elasticsearch-js/src/lib/connectors/http.js
CustomHttpConnector.prototype.makeReqParams = function (params) {
  params = params || {};
  var host = this.host;
  var reqParams = {
    method: params.method || 'GET',
    protocol: host.protocol + ':',
    auth: host.auth,
    hostname: host.host,
    port: host.port,
    path: (host.path || '') + (params.path || ''),
    headers: host.getHeaders(params.headers),
    agent: this.agent,
    rejectUnauthorized: true,
    ca: fs.readFileSync('publicCertificate.crt', 'utf8')
  };
  if (!reqParams.path) {
    reqParams.path = '/';
  }
  var query = host.getQuery(params.query);
  if (query) {
    reqParams.path = reqParams.path + '?' + qs.stringify(query);
  }
  return reqParams;
};

module.exports = CustomHttpConnector;
Then register it like so:
var elasticsearch = require('elasticsearch');
var CustomHttpConnector = require('./customHttpConnector');

var Elasticsearch = function() {
  this.client = new elasticsearch.Client({
    host: {
      host: 'my.server.com',
      port: '443',
      protocol: 'https',
      auth: 'user:passwd'
    },
    keepAlive: true,
    apiVersion: "1.3",
    connectionClass: CustomHttpConnector
  });
}
https://gist.github.com/fractalf/d08de3b59c32197ccd65
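The same idea works for plain curl: instead of disabling verification with -k, point --cacert at the self-signed certificate (the same file the connector above reads):

curl --cacert publicCertificate.crt https://localhost:9999/ --user username:password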
If you want to make simple AJAX calls without using the ES client, the only thing you can do is prompt the user to visit the page and accept the certificate themselves when the request is denied.
Also see: https://stackoverflow.com/a/4566055/5758328

Elasticsearch with Yii 2.0: Error: Elasticsearch request failed: 7 - Failed to connect to ##.##.##.### port 9200: Connection refused

I have Elasticsearch properly configured on my server. I can do everything from the command line using cURL. I can even connect to it using cURL from a PHP script outside Yii. However, I can't seem to get it to work from within Yii 2.0.
In my config, I have:
'elasticsearch' => [
    'class' => 'yii\elasticsearch\Connection',
    'nodes' => [
        ['http_address' => 'localhost:9200'],
        // configure more hosts if you have a cluster
    ],
],
But when I try to do a simple query in Yii, I get the error below. Note how it is using my server's IP address rather than 'localhost' or '127.0.0.1'. (I've masked out my IP address for security.)
Elasticsearch Database Exception – yii\elasticsearch\Exception
Elasticsearch request failed: 7 - Failed to connect to ##.##.##.### port 9200: Connection refused
Error Info: Array
(
    [requestMethod] => GET
    [requestUrl] => http://##.##.##.###:9200/profiles/profile/_search
    [requestBody] => {"size":100,"query":{"match_all":{}}}
    [responseHeaders] => Array
        (
        )
    [responseBody] =>
)
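As a sanity check, you can replay the exact request from the error info against the node address you actually configured (the index and request body come straight from the error dump above):

curl -X GET 'http://localhost:9200/profiles/profile/_search' \
  -d '{"size":100,"query":{"match_all":{}}}'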
I was able to fix this error by updating Elasticsearch to a version greater than 1.3.0, which is the minimum requirement for yiisoft/yii2-elasticsearch.
Run curl -X GET 'http://127.0.0.1:9200' to check which version you are running.
First, follow these steps to download Elasticsearch:
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.5.2.tar.gz
mkdir es
tar -xf elasticsearch-1.5.2.tar.gz -C es
cd es
./bin/elasticsearch
Then you should be able to access localhost:9200 and get something like this:
{
  "name" : "Sigyn",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.4.0",
    "build_hash" : "ce9f0c7394dee074091dd1bc4e9469251181fc55",
    "build_timestamp" : "2016-08-29T09:14:17Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.2"
  },
  "tagline" : "You Know, for Search"
}
Then, secondly, follow the instructions at https://github.com/yiisoft/yii2-elasticsearch, and you are done.
