I'd like to use ReactiveSearch with my own plain vanilla Elasticsearch cluster. The examples and documentation describe this as possible (see the url param of ReactiveBase), but I get connection errors and a WebSocket call to wss://.. , which looks like ReactiveBase is trying to connect to an appbase.io-hosted Elasticsearch instead. It also passes a credentials code along with the call to Elasticsearch that is not specified in my code.
Is it possible to connect to a normal Elasticsearch cluster, and where can I find documentation on how to do this?
This is my definition of ReactiveBase:
<ReactiveBase app="documents" url="https://search-siroop-3jjelqkbvwhzqzsolxt5ujxdxm.eu-central-1.es.amazonaws.com/">
To implement this, I followed the ReactiveSearch Quickstart.
Yes, it's possible to connect to a normal Elasticsearch cluster with ReactiveSearch (docs). It seems you're using the correct props. Sample code:
<ReactiveBase
  app="your-elasticsearch-index"
  url="http://your-elasticsearch-cluster"
>
  <Component1 .. />
  <Component2 .. />
</ReactiveBase>
The app prop refers to the index name. Looks like you're using this with AWS. Since AWS doesn't allow you to configure ES settings, you might need to use a middleware proxy server. From the docs:
If you are using Elasticsearch on AWS, then the recommended approach
is to connect via the middleware proxy as they don’t allow setting the
Elasticsearch configurations.
The docs also explain how you can write your own proxy server; a sketch of that idea follows the links below.
TLDR:
Proxy server
Using the proxy server in client app with reactivesearch
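For a rough idea of what such a middleware proxy can look like, here is a minimal Node sketch using express and http-proxy-middleware (v1+); the package choice and port are assumptions rather than the docs' exact code, and the target is the AWS endpoint from the question:

// proxy.js — forwards ReactiveSearch's requests to the AWS ES endpoint
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

app.use(
  '/',
  createProxyMiddleware({
    target: 'https://search-siroop-3jjelqkbvwhzqzsolxt5ujxdxm.eu-central-1.es.amazonaws.com',
    changeOrigin: true, // rewrite the Host header to match the target
  })
);

app.listen(7777, () => console.log('Proxying Elasticsearch on http://localhost:7777'));

With this running, ReactiveBase would point at the proxy instead, e.g. url="http://localhost:7777".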
The WebSocket connection error you see isn't what's breaking things; the socket is only used for streaming, which works on appbase.io. This has been fixed in the 2.2.0 release. Hope this helps :)
Related
I'm using Strapi v4 along with the Prometheus plugin, and right now my app metrics are being exposed on http://localhost:1337/api/metrics
But I need them to be on another port, like http://localhost:9090/metrics (also removing the api prefix).
So Strapi and the rest of the backend would still be running on port 1337, and only the metrics would be on 9090.
I've been through the documentation, but it seems there is no configuration for that. Can anybody help me think of a way to do this?
Right now the metrics don't run on a separate server; they run on the same server as Strapi.
This is so the endpoint can be secured with the user-permissions plugin and Strapi's API tokens.
In the future, I could look into making it an option to create a separate server.
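In the meantime, a small relay in front of the existing endpoint is one workaround. A minimal Node sketch, assuming Node 18+ (for the global fetch) and the plugin's default /api/metrics path:

// metrics-relay.js — re-serve Strapi's metrics on port 9090 under /metrics
const http = require('http');

const STRAPI_METRICS_URL = 'http://localhost:1337/api/metrics';

http
  .createServer(async (req, res) => {
    if (req.url !== '/metrics') {
      res.statusCode = 404;
      return res.end();
    }
    try {
      // If the endpoint is protected with a Strapi API token, pass it along:
      // fetch(STRAPI_METRICS_URL, { headers: { Authorization: `Bearer ${process.env.STRAPI_TOKEN}` } })
      const upstream = await fetch(STRAPI_METRICS_URL);
      res.writeHead(upstream.status, {
        'Content-Type': upstream.headers.get('content-type') || 'text/plain',
      });
      res.end(await upstream.text());
    } catch {
      res.statusCode = 502;
      res.end('metrics upstream unavailable');
    }
  })
  .listen(9090, () => console.log('Metrics relayed on http://localhost:9090/metrics'));

Prometheus would then scrape port 9090 while Strapi keeps serving everything else on 1337.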
I deployed a Laravel application on AWS Elastic Beanstalk.
I want to incorporate caching with Redis as my cache driver, as well as Elasticsearch.
I managed to run these two features locally (Redis on port 6379 and Elasticsearch on 9200),
but now I want them to run on remote servers, simply specifying their endpoints in my .env file.
Can anyone let me know how I can obtain remote URLs for Redis and Elasticsearch?
Update:
I found out that Heroku offers the ability to create a Redis instance, and thereby one can obtain a URL for Redis. I presume a similar thing exists for Elasticsearch.
If this is not the right way to do it, please let me know how it works.
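On AWS, the usual counterparts are an ElastiCache for Redis instance and an Amazon Elasticsearch Service domain; each gives you an endpoint hostname to drop into .env. A rough sketch of what that could look like (the hostnames are placeholders, and ELASTICSEARCH_HOST is a hypothetical key whose actual name depends on the Elasticsearch client package you use):

CACHE_DRIVER=redis
REDIS_HOST=my-cache.abc123.0001.euc1.cache.amazonaws.com
REDIS_PORT=6379
# Hypothetical key; check your Elasticsearch package's config for the real one
ELASTICSEARCH_HOST=https://search-mydomain-abc123.eu-central-1.es.amazonaws.com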
New to Kibana & not an expert in web security. We're trying to build a small startup in which we're leveraging Kibana 5.x for our back-office analysts to explore data. This is a web app and will be accessible over the internet.
Also, X-Pack security (though promising) may not be an option for us, purely because of cost.
I'd like to summarize my thoughts and get them validated by professionals out here.
Firstly, I'm thinking of putting Elasticsearch behind a firewall so that only my app server and the Kibana server can access it; ES would then be secure.
I'm thinking of fronting Kibana with a reverse proxy (Apache or nginx) and applying basic authentication, with everything served over HTTPS.
I'll only allow GET requests to Kibana through this reverse proxy so that the users are read-only.
Does this have any gaps? Also, I'm wondering whether Kibana makes direct calls to Elasticsearch from its JavaScript running in the browser. If it does, that would be another potential backdoor to ES. What should be done in that case?
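To make the plan concrete, here's a minimal sketch of the nginx front end described above, assuming Kibana listens on localhost:5601 and an htpasswd file at /etc/nginx/.htpasswd (certificate paths and the hostname are placeholders):

server {
    listen 443 ssl;
    server_name kibana.example.com;

    ssl_certificate     /etc/nginx/ssl/kibana.crt;
    ssl_certificate_key /etc/nginx/ssl/kibana.key;

    location / {
        auth_basic           "Kibana";
        auth_basic_user_file /etc/nginx/.htpasswd;

        # Read-only: reject anything that isn't a GET (HEAD is implied by GET)
        limit_except GET {
            deny all;
        }

        proxy_pass http://localhost:5601;
    }
}

Whether GET-only is actually sufficient depends on which endpoints Kibana's browser client calls, so this is worth verifying against the Kibana version in use.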
I read the docs, but I couldn't make it work.
I have a server that holds Elasticsearch and external servers that query it. Until now, Elasticsearch has been accessible from any IP.
Example:
the public ip:port of the Elasticsearch server: 123.123.123.123:9200
I have the domains anothersocialnetwork.com and anothersocialnetwork2.com,
and I want only them and localhost to be able to query the Elasticsearch server.
Thanks a lot.
There are multiple ways to achieve this. The one I would advise is as follows:
Run Elasticsearch on the localhost interface by setting network.host to localhost in the elasticsearch.yml file.
Now only applications on localhost can access Elasticsearch.
Place a proxy like nginx or Apache in front; the proxy will be able to access Elasticsearch. Now whitelist the IPs that should be allowed in the proxy, as in the sketch below.
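A minimal sketch of that setup. Note that nginx whitelists by IP address, not by domain, so you'd list the server IPs behind your two domains (the allow entries below are placeholders):

# elasticsearch.yml — bind ES to the loopback interface only
network.host: localhost

# nginx — listen on the public IP, whitelist callers, proxy to the local ES
server {
    listen 123.123.123.123:9200;

    allow 127.0.0.1;        # localhost
    allow 123.123.123.124;  # anothersocialnetwork.com (placeholder IP)
    allow 123.123.123.125;  # anothersocialnetwork2.com (placeholder IP)
    deny  all;

    location / {
        proxy_pass http://localhost:9200;
    }
}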
You can also take a look at the Elasticsearch jetty plugin. It ships with some security configuration, but I'm not sure if it's still actively developed.
Also, on securing Elasticsearch, I would recommend going through this blog.
I have a VPC for Elasticsearch nodes in AWS behind an internal load balancer. How can I access the nodes from a Heroku Ruby application?
I can't have the ES nodes public-facing.
Should I instead use a proxy to secure Elasticsearch and reach the proxy from the Ruby app with some URL key? Is there a simpler way?
You probably want to look at a proxy authentication mechanism. I personally would recommend using something like Squid.
You can read more about its authentication support here:
http://wiki.squid-cache.org/Features/Authentication
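For a rough idea, a Squid front end with basic authentication might look like the following sketch (the helper path is a common Debian/Ubuntu location and may differ on your distro; the credentials file would be created with htpasswd):

# /etc/squid/squid.conf — basic-auth proxy in front of Elasticsearch
http_port 3128

auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwords
auth_param basic realm elasticsearch-proxy

acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all

The Heroku app would then reach Elasticsearch through the proxy by configuring its HTTP client with a proxy URL along the lines of http://es_user:secret@your-proxy-host:3128.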
This is another post that talks about a similar workflow to yours:
HTTP Spec: Proxy-Authorization and Authorization headers