I have a VPC for Elasticsearch nodes in AWS behind an internal load balancer. How can I access the nodes from a Heroku Ruby application?
I can't have the ES nodes be public-facing.
Should I instead use a proxy to secure Elasticsearch and reach the proxy from the Ruby app with some URL key? Is there a simpler way?
You probably want to look at a proxy authentication mechanism. I would personally recommend using something like Squid.
You can read more about its authentication support here:
http://wiki.squid-cache.org/Features/Authentication
This is another post that talks about a similar workflow to yours:
HTTP Spec: Proxy-Authorization and Authorization headers
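As a rough illustration of that approach, here is a minimal Squid configuration sketch with basic authentication (assuming the NCSA auth helper; the helper path, port, and password file location vary by distribution and are placeholders here):
# /etc/squid/squid.conf -- minimal forward proxy with basic auth (sketch)
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwords
auth_param basic realm es-proxy
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all
http_port 3128
The Heroku app would then send its Elasticsearch requests through this proxy (which lives inside the VPC and can reach the internal load balancer), authenticating with the credentials stored in the password file.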
We are using Cognito for authentication and authorization for our microservices deployed on EC2. We currently have an ALB in front of EC2, which is connected to Route 53 and then to API Gateway. We know this is not a good way of using both services, but we set it up in a hurry. Now we have time to correct this.
What we want to do:
Use Cognito for authorization and authentication for our microservices deployed on EC2
Use auto scaling in case of high traffic
Map some of the exposed APIs to our custom domain URL
Any security-related practices for both internal and external calls that we should take care of?
I will be really grateful for help from all the techies out there!
Thanks!
I am making requests to the Google Geocoding API within my Node project. In production the project runs in containers (AWS Elastic Container Service), which means the IP address for the service can change automatically, so I constantly have to update the IP whitelist on my Google API key.
IP whitelisting is the only means by which I can secure the API key, and if I don't secure it, the key soon becomes useless because of unauthorized use from another source.
Is there a practical solution to securing the connection with the Geocoding API from an application running on containers?
Thanks in advance for your help!
When you create your key for the Geocoding API, you can simply set no restrictions on it. The security implication is that your key is now usable from anywhere, so the importance of keeping it safe is a bit higher.
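To keep an unrestricted key safe, it should only ever live server-side. A minimal sketch, assuming Node 18+ (for the global fetch) and a key injected through an environment variable named GOOGLE_MAPS_API_KEY (the variable name is just an example):
// geocode.js -- server-side call to the Geocoding API; the key never reaches a client
const API_KEY = process.env.GOOGLE_MAPS_API_KEY; // injected via the ECS task definition or a secrets manager

async function geocode(address) {
  const url = new URL('https://maps.googleapis.com/maps/api/geocode/json');
  url.searchParams.set('address', address);
  url.searchParams.set('key', API_KEY);

  const res = await fetch(url); // global fetch, available in Node 18+
  if (!res.ok) throw new Error(`Geocoding request failed: ${res.status}`);
  return res.json();
}

geocode('1600 Amphitheatre Parkway, Mountain View, CA')
  .then((data) => console.log(data.status, data.results[0] && data.results[0].formatted_address));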
I was able to create some VM instances and add them to an instance group; I also created an HTTP health check and a backend service with the gcloud command in a GCE project, using these guides:
https://cloud.google.com/sdk/gcloud/reference/compute/http-health-checks/create
https://cloud.google.com/sdk/gcloud/reference/compute/backend-services/create
However, I can't find the documentation for creating a frontend service, which is required to create a load balancer; the documentation for creating a load balancer is also not available in the Google Cloud SDK reference.
Is there really no way to use the gcloud command to create a frontend service and a load balancer?
Found it: it's called forwarding-rules, not frontend-services, which is rather confusing.
And a forwarding rule doesn't point directly to a backend service: a global forwarding rule points to a target HTTP proxy, and the target HTTP proxy needs a URL map.
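A sketch of the full chain, assuming a backend service named my-backend-service already exists (all other names here are placeholders):
gcloud compute url-maps create my-url-map \
    --default-service my-backend-service

gcloud compute target-http-proxies create my-http-proxy \
    --url-map my-url-map

gcloud compute forwarding-rules create my-forwarding-rule \
    --global \
    --target-http-proxy my-http-proxy \
    --ports 80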
Reference:
https://cloud.google.com/sdk/gcloud/reference/compute/forwarding-rules/create
Credit to the answer by eSniff here:
https://stackoverflow.com/a/28533614/5581893
I'd like to use reactivesearch with my own plain vanilla Elasticsearch cluster. While the example and documentation (ReactiveBase, see the url param) describe that this should be possible, I get connection errors and a WebSocket call to wss://.., which looks like ReactiveBase is trying to connect to an appbase.io-hosted Elasticsearch instead. It also passes a credentials code along with the call to Elasticsearch that is not specified in my code.
Is it possible to connect to a normal elastic and where can I find the documentation on how to do this?
This is my definition of ReactiveBase:
<ReactiveBase app="documents" url="https://search-siroop-3jjelqkbvwhzqzsolxt5ujxdxm.eu-central-1.es.amazonaws.com/">
To implement this example I followed the ReactiveSearch Quickstart
Yes, it's possible to connect to a normal Elasticsearch cluster (docs) with reactivesearch. It seems you're using the correct props. Sample code:
<ReactiveBase
app="your-elasticsearch-index"
url="http://your-elasticsearch-cluster"
>
<Component1 .. />
<Component2 .. />
</ReactiveBase>
The app prop refers to the index name. Looks like you're using this with AWS. Since AWS doesn't allow you to configure ES settings, you might need to use a middleware proxy server. From the docs:
If you are using Elasticsearch on AWS, then the recommended approach
is to connect via the middleware proxy as they don’t allow setting the
Elasticsearch configurations.
The docs also explain how you can write your own proxy server.
TLDR:
Proxy server
Using the proxy server in client app with reactivesearch
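For illustration only (this is not the exact code from those docs), a minimal Node proxy sketch assuming express and http-proxy-middleware, with the AWS Elasticsearch endpoint as a placeholder:
// proxy-server.js -- forwards reactivesearch requests to the AWS ES domain
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

app.use(
  '/',
  createProxyMiddleware({
    target: 'https://your-aws-es-domain.eu-central-1.es.amazonaws.com', // placeholder endpoint
    changeOrigin: true, // rewrite the Host header to match the target
  })
);

app.listen(8080, () => console.log('ES proxy listening on :8080'));
The url prop of ReactiveBase would then point at this proxy (for example http://localhost:8080) instead of the ES domain directly.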
The WebSocket connection error you see here isn't causing the issue; the WebSocket is used for streaming, which works on appbase.io. This has been fixed in the 2.2.0 release. Hope this helps :)
I'm trying to track down who is issuing queries to an Elasticsearch cluster. Elasticsearch doesn't appear to have an access log.
Is there a place where I can find out which IP is hitting the cluster?
Elasticsearch doesn't provide any security out of the box, and that is on purpose and by design.
So you have a couple of solutions:
1. Don't leave your ES cluster exposed to the open world; put it behind a firewall (i.e., whitelist the hosts that can access ports 9200/9300 on your nodes).
2. Look into the Shield plugin for Elasticsearch in order to secure your environment.
3. Put an nginx server in front of your cluster to act as a reverse proxy.
4. Add simple basic authentication with either the elasticsearch-jetty plugin or the elasticsearch-http-basic plugin, which also allows you to whitelist the client IPs that are allowed to access your cluster.
If you want to have access logs, you need either 2 or 3, but all solutions above will allow you to secure your ES environment.
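As an illustration of option 3, a minimal nginx reverse-proxy sketch that also produces an access log (the upstream address, network range, and log path are placeholders):
# /etc/nginx/conf.d/elasticsearch.conf -- reverse proxy with access logging (sketch)
server {
    listen 8080;

    access_log /var/log/nginx/es_access.log; # records which client IPs hit the cluster

    location / {
        allow 10.0.0.0/8; # whitelist your clients
        deny all;
        proxy_pass http://127.0.0.1:9200; # the ES node behind the proxy
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}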