Elasticsearch: security concerns

We are using Elasticsearch as the back-end for our in-house logging and monitoring system. We have multiple sites pouring data into one ES cluster, each into a different index: e.g. abc-us has data from the US site, abc-india has it from the India site.
Our concern is that we need some security checks in place before data is pushed into the cluster:
data coming into an index arrives from the right IP address
the incoming JSON request only inserts new data, and does not delete or update
on the read side, certain IPs should not be able to read data from other indices
Kindly let me know whether this is possible to achieve with Elasticsearch.

The elasticsearch-jetty plugin brings the full power of Jetty and adds several new features to Elasticsearch. With this plugin, Elasticsearch can handle SSL connections, support basic authentication, and log all or some incoming requests in plain-text or JSON format.
The idea is to add a Jetty wrapper to Elasticsearch, as a plugin.
All that remains is to restrict certain URLs and certain methods (e.g. DELETE) to certain users.
You can find elasticsearch-jetty on GitHub with a detailed specification of its usage, configuration and, of course, limitations.
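For illustration, wiring the plugin in is mostly a matter of pointing Elasticsearch's HTTP transport at Jetty in elasticsearch.yml and listing the Jetty XML configuration files to load. The file names below are the kind shipped with the plugin, but check the project README for the exact ones in your version:

    http.type: com.sonian.elasticsearch.http.jetty.JettyHttpServerTransportModule
    sonian.elasticsearch.http.jetty:
        config: jetty.xml,jetty-hash-auth.xml,jetty-restrict-all.xml

Restricting particular URLs or methods such as DELETE to particular users is then ordinary Jetty security-constraint configuration inside those XML files.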

How to show which Cassandra node served your request

Why?
For educational purposes. I think it would be really nice for my audience to actually "see" it work like that.
Setup
A dockerized Spring Boot REST API (serving up customer information)
A dockerized Cassandra cluster consisting of three connected nodes, holding customer data with a replication factor of two.
Suggestions
Showing which IP address or container name served my request
Showing which IP address or container name held the data that was used to serve my request.
If I were to run these nodes on three separate physical machines, maybe showing which machine held my data?
Something else you have in mind that really shows the distributed capabilities of Cassandra
Can this be achieved with docker logs, or is there something in Spring Data Cassandra that I am not aware of?
I don't know about Spring Data, but with the plain Java driver you can get execution information from the ResultSet via getExecutionInfo(), and call getQueriedHost() on it. If you're using the default DCAware/TokenAware load balancing policy, then you reach at least one of the nodes that hold your data. The rest of the information you can get via the Metadata class: from it you can get the list of token ranges owned by each host, generate a token for your partition key, and look it up in the token ranges.
P.S. See the Java driver documentation for more details.
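A minimal sketch of both lookups with the Java driver (3.x), assuming a keyspace named customers, a customer table with a text id column, and a contact point on localhost; Metadata.getReplicas() is a shortcut for the token-range lookup described above:

    import java.nio.ByteBuffer;
    import com.datastax.driver.core.*;

    public class WhoServedMyRequest {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("customers")) {

                ResultSet rs = session.execute("SELECT * FROM customer WHERE id = ?", "42");

                // The coordinator node that served this request.
                Host coordinator = rs.getExecutionInfo().getQueriedHost();
                System.out.println("Served by: " + coordinator.getAddress());

                // The nodes that own replicas of this partition key.
                ByteBuffer key = TypeCodec.varchar()
                        .serialize("42", ProtocolVersion.NEWEST_SUPPORTED);
                System.out.println("Data held by: "
                        + cluster.getMetadata().getReplicas("customers", key));
            }
        }
    }

Printing both per request in the dockerized demo should make it visible whenever the coordinator is not itself one of the replicas.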

ElasticSearch deep learning

I have an Elasticsearch index which logs my scraper statistics, like response status and headers used. How can I do something like machine learning to predict which combination of headers would succeed best in future scrapes? Is it possible with plain Elasticsearch? If not, what plugins would you suggest?
From what I found out, ELK only provides machine learning functionality in Kibana's X-Pack extension, e.g. anomaly detection and forecasts. For me it's useless, because my model would need advanced data filtering and I want to visualize all my predictions on a dashboard. If you want to make custom predictions, then the only way is to write your own prediction script or use some out-of-the-box ML solution such as Amazon Machine Learning.
You can treat Elasticsearch as an ordinary NoSQL database and periodically extract raw data from it using REST requests, redirecting that data to your ML script or ML web service. You can then save the predictions back to Elasticsearch as a new index, which can later be visualized in Kibana.
              HTTP GET                                        HTTP PUT
Elasticsearch =========> Script (filtering and predictions) ==========> Elasticsearch
I'm still looking for the best way to produce predictions, but for now a custom script seems like the only option, and I'm currently developing one.
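A minimal sketch of that loop in Java (11+), assuming a local cluster on port 9200, a source index named scraper-stats, a target index named scraper-predictions, and a hypothetical predict() placeholder standing in for the actual ML step:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class PredictionLoop {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // HTTP GET: pull raw scraper stats out of Elasticsearch.
            HttpRequest search = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:9200/scraper-stats/_search?size=1000"))
                    .GET()
                    .build();
            String rawHits = client.send(search, HttpResponse.BodyHandlers.ofString()).body();

            // Hypothetical step: filter the hits and score header combinations.
            String prediction = predict(rawHits);

            // Write the prediction back as a new index so Kibana can chart it.
            HttpRequest index = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:9200/scraper-predictions/doc/"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(prediction))
                    .build();
            client.send(index, HttpResponse.BodyHandlers.ofString());
        }

        // Placeholder for the custom filtering/prediction script described above.
        static String predict(String rawHits) {
            return "{\"best_headers\":\"...\",\"score\":0.0}";
        }
    }

Run on a schedule (cron or similar), this gives you the Elasticsearch -> script -> Elasticsearch cycle from the diagram above.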

Proper crawler architecture - how to do it using the Elasticsearch ecosystem?

In v1.0 of a .NET data crawler, I created a Windows service that read URLs from a database and, based on some logic, selected what to crawl at a specified interval.
This was single-threaded and worked well for a low number of endpoints, but scaling is obviously an issue.
I'm trying to find out how to do this using the Elasticsearch (ELK) stack and came across Httpbeat, "a Beat to poll HTTP endpoints in a regular interval and ship the result to the configured output channel, e.g. Logstash and Elasticsearch."
Looking at the documentation, you have to add URLs to the config.yaml file. That's not what I'm looking for, as the list of URLs could change and we may not want all URLs crawled at the same time.
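For reference, the static configuration Httpbeat expects looks roughly like the sketch below; the field names are illustrative, so check the Httpbeat documentation for the exact schema:

    httpbeat:
      hosts:
        - url: http://example.com/api/status
          method: get
          period: 10s
        - url: http://example.com/api/health
          method: get
          period: 30s

Every endpoint is pinned in the file ahead of time, which is exactly the limitation described above.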
Then there's RSS for Logstash, which is a command-line tool - again, not what I'm looking for.
Is there a way to make use of the Beats daemon to read from the Elasticsearch database and do work based on database values - crawls, etc.?
To take this to the enterprise level, do Beats or any other components of the Elasticsearch ecosystem use message queuing or a spooler (like Filebeat does - is this built into Beats?)?

How to override Elasticsearch routing with a plugin?

Since 2.x, Elasticsearch has disabled the routing field inside documents.
Because of this, it has become really difficult to specify the routing parameter, since it must now go via the HTTP call, and the code for making that call may be embedded deep in a third-party library such as the elasticsearch-hadoop plugin (see CommonsHttpTransport.execute() for example).
I see a couple of old posts that talk about overriding the "routing" parameter with a plugin (see github/elasticsearch-direct-routing-plugin and the hashing-algo-for-routing post).
But I am unable to find the setting cluster.routing.operation.hash.type anywhere in the Elasticsearch code.
Does anyone know whether the above option is supported in the latest version of ES, or whether the latest version of Elasticsearch supports any way to override routing other than the URL parameter?
I want to completely disable routing such that whatever node receives a batch of documents just indexes it locally, without distributing it to any other primary shards (its own replicas allowed, of course). This would greatly improve our ingestion from Storm to Elasticsearch, where the number of ES bolts is much higher than the number of ES primary shards and those bolts distribute the load equally among all primaries.
We will never search by ID.
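For context, since the in-document routing field was removed, the only stock way to direct routing is the URL parameter on each request, e.g. (assuming an index named logs):

    PUT /logs/event/1?routing=my-shard-key
    { "message": "..." }

    GET /logs/event/1?routing=my-shard-key

This is precisely what is hard to reach when a library like elasticsearch-hadoop builds the HTTP requests for you.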

Using Elasticsearch as a central data repository

We are currently using elasticsearch to index and perform searches on about 10M documents. It works fine and we are happy with its performance. My colleague who initiated the use of elasticsearch is convinced that it can be used as the central data repository and other data systems (e.g. SQL Server, Hadoop/Hive) can have data pushed to them. I didn't have any arguments against it because my knowledge of both is too limited. However, I am concerned.
I do know that data in Elasticsearch is stored in a manner that is efficient for text searching. Hadoop stores data just as a file system would, but in a manner that is efficient for scaling/replicating blocks over multiple data nodes. Therefore, in my mind it seems more beneficial to use Hadoop (as it is more agnostic w.r.t. its view on data) as the central data repository, and then push data from Hadoop to SQL, Elasticsearch, etc.
I've read a few articles on Hadoop and elasticsearch use cases and it seems conventional to use Hadoop as the central data repository. However, I can't find anything that would suggest that elasticsearch wouldn't be a decent alternative.
Please Help!
As is the case with all database deployments, it really depends on your specific application.
Elasticsearch is a great open source search engine built on top of Apache Lucene. Its features and upgrades allow it to basically function just like a schema-less JSON datastore that can be accessed using both search-specific methods and regular database CRUD-like commands.
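To make "CRUD-like plus search" concrete, the same index answers both styles of request (index and type names here are illustrative):

    PUT    /customers/customer/1    -> create or replace a document
    GET    /customers/customer/1    -> read it back by id
    POST   /customers/_search       -> full-text search over the same data
    DELETE /customers/customer/1    -> delete it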
For all the advantages that Elasticsearch brings, there are still some major disadvantages:
Security - Elasticsearch does not provide any authentication or access control functionality. (Update: this has since been addressed with the introduction of Shield.)
Transactions - there is no support for transactions or transactional processing of data manipulation. (Update: ingest-side data manipulation can now be handled with Logstash.)
Durability - ES is distributed and fairly stable, but backups and durability are not as high a priority as in other data stores.
Maturity of tools - ES is still relatively new and has not had time to develop mature client libraries and 3rd-party tools, which can make development much harder. (Update: it can be considered quite mature now, with a variety of connectors and tools around it like Kibana.)
Large computations - commands for searching data are not suited to "large" scans of data and advanced computation on the db side.
Data availability - ES makes data available in "near real-time", which may require additional considerations in your application (e.g. on a comments page, when a user adds a new comment, refreshing the page might not actually show it because the index is still refreshing; see the sketch after this list).
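A minimal illustration of that near-real-time behaviour, assuming an index named comments (the _refresh endpoint and the one-second default refresh interval are standard Elasticsearch):

    PUT  /comments/comment/1    -> index a new comment
    GET  /comments/_search      -> may not return the new comment yet
    POST /comments/_refresh     -> force a refresh
    GET  /comments/_search      -> the new comment is now visible

In practice the index refreshes on its own (every second by default), so searches catch up shortly after each write.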
If you can deal with these issues then there's certainly no reason why you can't use Elasticsearch as your primary data store. It can actually lower complexity and improve performance by not having to duplicate your data but again this depends on your specific use case.
As always, weigh the benefits, do some experimentation and see what works best for you.
DISCLAIMER: This answer was written a while ago, for the Elasticsearch 1.x series. These criticisms still largely stand for the 2.x series, but Elastic is working on them: the 2.x series comes with more mature tools, APIs and plugins (for example, on the security side, Shield) as well as tools like Logstash and Beats, etc.
I'd highly discourage most users from using Elasticsearch as their primary datastore. It will work great until your cluster melts down due to a network partition, and even settings such as minimum_master_nodes that the ES pros always set won't save you. See this excellent analysis by Aphyr in his Call Me Maybe series:
http://aphyr.com/posts/317-call-me-maybe-elasticsearch
eliasah is right, it depends on your use case, but if your data (and job) is important to you, stay away.
Keep the golden record of your data stored in something really focused on persistence, and sync your data out to search from there. It adds extra complexity and resources, but will result in a better night's rest :)
There are plenty of ways to go about this, and if Elasticsearch does everything you need, you can look into Kafka for persisting all the events going into the cluster, which would allow replaying them if things go wrong. I like this approach as it provides an async ingestion pipeline into Elasticsearch that also handles the persistence.
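A hedged sketch of that pipeline using Logstash's Kafka input; the option names below match newer versions of the plugin and may differ in older ones, and the topic name es-events is made up for the example:

    input {
      kafka {
        bootstrap_servers => "localhost:9092"
        topics => ["es-events"]
      }
    }
    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        index => "events-%{+YYYY.MM.dd}"
      }
    }

Producers write every event to Kafka first; Logstash consumes the topic into Elasticsearch, and if the cluster is ever lost, the same topic can simply be replayed.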
