How to restrict index creation/deletion on Elasticsearch cluster? - elasticsearch

How do I authenticate/secure index creation and deletion operations in an Elasticsearch 1.0.0 cluster? I would also like to know how to disable the delete-index operation in the Elasticsearch HQ plugin. I tried the following settings in the elasticsearch.yml file, but users can still perform the operations.
action.disable_delete_all_indices: true
action.auto_create_index: false
Appreciate any inputs.

Write a custom ConnectionPool class and use it instead of the default connection pools that ship with the client, making the auth parameters mandatory.
That way every request is authenticated.
You can use Pimple, a simple PHP dependency injection container.
Example:
$elasticsearch_params['connectionParams']['auth'] =
    array($collection['username'], $collection['password'], 'Basic');

Related

Janusgraph doesn't allow to set vertex Id even after setting this property `graph.set-vertex-id=true`

I'm running a JanusGraph server backed by Cassandra. It doesn't allow me to use custom vertex ids.
I see the following log when the JanusGraph Gremlin server is starting:
Local setting graph.set-vertex-id=true (Type: FIXED) is overridden by globally managed value (false). Use the ManagementSystem interface instead of the local configuration to control this setting
I even tried to set this property via the management API, still with no luck:
gremlin> mgmt = graph.openManagement()
gremlin> mgmt.set('graph.set-vertex-id', true)
As the log message already states, this config option has the mutability FIXED which means that it is a global configuration option. Global configuration is described in this section of the JanusGraph documentation.
It states that:
Global configuration options apply to all instances in a cluster.
JanusGraph stores these configuration options in its storage backend which is Cassandra in your case. This ensures that all JanusGraph instances have the same values for these configuration values. Any changes that are made to these options in a local file are ignored because of this. Instead, you have to use the management API to change them which will update them in the storage backend.
But that is already what you tried with mgmt.set(). This doesn't work in this case however because this specific config option has the mutability level FIXED. The JanusGraph documentation describes this as:
FIXED: Like GLOBAL, but the value cannot be changed once the JanusGraph cluster is initialized.
So, this value really cannot be changed in an existing JanusGraph cluster. Your only option is to start with a new cluster if you really need to change it.
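For completeness: a FIXED option like this is picked up from the local configuration of the very first JanusGraph instance that initializes the cluster, and that first startup writes it into the globally managed configuration. A minimal properties sketch for a fresh cluster (the storage settings are placeholders for your setup):

```properties
# janusgraph.properties for a *new* cluster. graph.set-vertex-id must be
# in place here before the cluster is initialized for the first time;
# afterwards the globally managed value wins and cannot be changed.
storage.backend=cql
storage.hostname=127.0.0.1
graph.set-vertex-id=true
```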
It is of course unfortunate that the error message suggested to use the management API even though it doesn't work in this case. I have created an issue with the JanusGraph project to improve this error message to avoid such confusion in the future: https://github.com/JanusGraph/janusgraph/issues/3206

Disabling/Pause database replication using ML-Gradle

I want to disable Database Replication from the replica cluster in MarkLogic 8 using ml-gradle, and re-enable it after updating the configuration.
There are tasks for enabling and disabling flexrep in ml-gradle, but I couldn't find anything similar for Database Replication. How can this be done?
ml-gradle uses the Management API to handle configuration changes. Database Replication is controlled by sending a PUT request to /manage/v2/databases/[id-or-name]/properties. Update your ml-config/databases/content-database.json file (the example file does not include that property) to include database-replication, with replication-enabled: true.
To see what that object should look like, you can send a GET request to the properties endpoint.
You can create your own command to set replication-enabled - see https://github.com/rjrudin/ml-gradle/wiki/Writing-your-own-management-task
I'll also add a ticket for making official commands - e.g. mlEnableReplication and mlDisableReplication, with those defaulting to the content database, and allowing for any database to be specified.
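Until such commands exist, the request described above can be built by hand. A small sketch of the PUT that would go to the Management API; the property names database-replication and replication-enabled come from the answer, but the full shape of that object depends on your replication setup, so GET the properties endpoint first to see what yours looks like:

```python
import json

# Build the Management API request for toggling database replication.
# Port 8002 is the default Management API port; "my-content-db" below
# is a hypothetical database name.

def replication_properties_request(host, database, enabled, port=8002):
    """Return (url, json_body) for a PUT to the database properties endpoint."""
    url = "http://%s:%d/manage/v2/databases/%s/properties" % (host, port, database)
    body = json.dumps({"database-replication": {"replication-enabled": enabled}})
    return url, body

url, body = replication_properties_request("localhost", "my-content-db", False)
print(url)   # the endpoint the PUT goes to
print(body)  # the properties fragment being updated
```

Sending that body with an authenticated PUT (for example via curl with digest auth) disables replication; sending it again with enabled=True re-enables it.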

Elasticsearch Disable Delete indexes

I am using Elasticsearch 1.7.1
After I create my indexes, I do not want anyone to be able to delete them (either manually or through some unintentional execution against my cluster).
Is it possible to set any configurations in elasticsearch and restart the service to achieve the above?
I have tried several steps but none of them are helping.
To prevent index deletion via a wildcard (/* or /_all), one thing you can do is add the following setting to your config file:
action.destructive_requires_name: true
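For reference, the destructive-operation settings mentioned in this thread could be combined in elasticsearch.yml like this (names are from the 1.x era and may differ in later versions). Note that none of these prevent deleting an explicitly named index; they only block wildcard deletes and implicit index creation:

```yaml
# elasticsearch.yml (ES 1.x era)
action.destructive_requires_name: true   # block DELETE /_all and DELETE /*
action.disable_delete_all_indices: true  # older setting with a similar effect
action.auto_create_index: false          # don't create indices on first write
```

Blocking deletion of named indices per user requires an authentication/authorization layer (e.g. a proxy) in front of the cluster, as the other threads here discuss.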
I solved this by configuring the HTTP CORS settings in Elasticsearch.

How do I ensure proper routing with logstash when I update a parent/child relationship document?

I have a parent/child relationship set up in Elasticsearch. When I try to update the parent it sometimes works and sometimes not. I believe I've narrowed it down to a missing-document error, because I can't figure out how to specify routing using logstash. (Parent/child documents must route to the same shard.) I thought Elasticsearch would do this automatically, given that I have set up the routing path in the mappings, but it only seems to work when I specify the routing parameter in the REST API URL. Logstash doesn't seem to have a way to add that when updating. I'm using the logstash elasticsearch output plugin with the http protocol.
To add to my confusion it seems elasticsearch 1.5 is deprecating the "path" property in mappings.
Is there any way to ensure proper routing with parent/child updates using logstash?
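One possibility, assuming a recent enough version of the logstash-output-elasticsearch plugin: the plugin exposes a routing option that accepts a sprintf-style field reference, so if each event carries the parent id in a field you can route child updates to the parent's shard. A sketch, where parent_id is a hypothetical field name:

```
output {
  elasticsearch {
    hosts       => ["localhost:9200"]
    index       => "myindex"
    document_id => "%{id}"
    routing     => "%{parent_id}"   # route to the same shard as the parent
  }
}
```

If your plugin version predates the routing option, the fallback is a custom http output (or a proxy) that appends ?routing=... to the bulk/update URL.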

Multitenant setup with Kibana and Elasticsearch

I am going to use logstash + Elasticsearch + Kibana for my project. I want to know how to use this stack with multiple tenants. Can anyone explain how, after authentication, Kibana queries the Elasticsearch index and loads its dashboard? Can I restrict Kibana to a specific Elasticsearch index for a particular user or some id? Has anybody tried this?
Thanks
You could, but depending on your use case it is probably not a good idea. There are a few gotchas, particularly regarding security and separating the users. First, Kibana is just JavaScript running in the browser, so whatever Kibana is allowed to do, your user is allowed to do too. You can have a separate index pattern for each "user", but Elasticsearch does not provide any way of authenticating users or authorizing a user's access to a specific index. You would have to use some sort of proxy for this.
I recommend http://www.found.no/foundation/elasticsearch-in-production/ and http://www.found.no/foundation/elasticsearch-security/ for a more in depth explanation.
Create an index for each tenant.
That way you can use a proxy (such as the app that hosts Kibana) to intercept the request and return settings that include the index to use.
The value that specifies the index can be the logged-in user, or you can get it from somewhere else.
To separate the data even further, you can use a prefix in each index name, and then specify an index pattern that matches all the indices related to a certain kind of data or entity.
Hope this helps.
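The index-per-tenant naming scheme above can be sketched with a small helper. The "logs" prefix and the lowercasing are illustrative assumptions, not anything Elasticsearch requires:

```python
# Sketch of the index-per-tenant scheme: one index per tenant, sharing a
# common prefix so a wildcard pattern can address all tenants of one kind.

def tenant_index(tenant, kind="logs"):
    """Index name holding one tenant's data, e.g. logs-acme."""
    return "%s-%s" % (kind, tenant.lower())

def tenant_pattern(kind="logs"):
    """Wildcard pattern matching that kind of index across all tenants."""
    return "%s-*" % kind

print(tenant_index("Acme"))   # -> logs-acme
print(tenant_pattern())       # -> logs-*
```

A proxy in front of Kibana would then resolve the authenticated user to tenant_index(user) and rewrite or restrict queries to that single index.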
Elasticsearch announced today a plugin they are working on that should add security features to the ES product. It will probably contain ways of restricting access based on roles and users, set up at the cluster and index level. If that happens, I see no way for them not to extend this security layer to Kibana as well. It also seems this plugin will only have a commercial version.
