Elasticsearch search locked with ransomware

I had not protected Elasticsearch, so someone hacked it and left a new index whose contents say where to transfer bitcoins. I can regenerate the data; the question is how to get rid of the intrusion. Accessing the indices directly works, but my search scripts get HTTP code 400 back when I make curl requests from PHP. Any ideas?
Edit:
I removed the data folder and created the index from scratch. It did not fix anything.

You need a fresh installation with proper security. Use X-Pack, or communicate over a private IP rather than a public one.
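A minimal sketch of locking the node down in elasticsearch.yml (the private address is illustrative, the security flag assumes X-Pack is installed, and setting names vary across Elasticsearch versions):

# elasticsearch.yml
network.host: 10.0.0.5        # bind only to a private interface (illustrative address)
http.port: 9200
xpack.security.enabled: true  # requires X-Pack; turns on authentication

Once the node is no longer publicly reachable, the rogue index itself can be dropped with the delete index API, e.g. curl -X DELETE 'http://localhost:9200/<ransom-index>' (the index name here is a placeholder).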

Related

Getting all mail from a shared folder in Outlook using the Graph API

I am trying to get all mails from a shared folder in the following location (Folder2):
\Public folders - me@domain.com\All public folders\Folder1\Folder2
I want to do this using Blue Prism, as I am an RPA developer. This means that I am going to use an HTTP request.
I have tried the following URLs without much luck:
https://graph.microsoft.com/v1.0/users/{user id}/messages/
https://graph.microsoft.com/v1.0/users/{user id}/mailfolders('Inbox')/messages
Does anyone know if I am using the wrong URL, what exactly goes in place of {user id} and 'Inbox', or if it is something else entirely?
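For reference, a request against a regular mailbox folder usually looks like the sketch below; $ACCESS_TOKEN and the mailbox address are placeholders, and the Mail.Read permission is assumed. {user id} takes either the user's GUID or their userPrincipalName, and well-known names such as inbox can stand in for a folder id. Note that, as far as I know, Exchange public folders are not exposed through Microsoft Graph at all, which may be why these calls fail.

# Placeholders: $ACCESS_TOKEN and the mailbox address.
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://graph.microsoft.com/v1.0/users/me@domain.com/mailFolders/inbox/messages"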

Can files in the public directory in Laravel be viewed without knowing their names?

I am storing a file in the public directory. If the file name is a 100-character random string, could someone still find the file? Is there any way to protect the public directory in Laravel?
Do not attempt to do this. Security by obscurity is not a valid approach.
A few flaws in this approach:
If I knew the file existed but not its name, I could try every combination to find it.
The web server may have an unknown flaw that allows directories to be listed.
If HTTPS is not used, anyone monitoring requests could see the file name.
The best way is to store the file outside the public directory (e.g. in /storage), create a route that serves it, and secure that route with a single-use token or a login, as sketched below.
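A minimal sketch of such a route, assuming the files live in storage/app/private and Laravel's built-in auth middleware; the route name and directory are illustrative:

// routes/web.php -- serve a private file through an authenticated route.
use Illuminate\Support\Facades\Route;

Route::get('/files/{name}', function (string $name) {
    $name = basename($name);                      // strip path segments to block traversal
    $path = storage_path('app/private/' . $name); // stored outside public/

    abort_unless(file_exists($path), 404);        // hide missing files behind a 404

    return response()->file($path);               // stream the file with proper headers
})->middleware('auth');

For the single-use-token variant, the same closure could check a token parameter against the database (or use Laravel's signed URLs) instead of the auth middleware.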

How to delete an index in a custom Java connector

I have built a custom connector that gets data from a web service and then indexes it. The web service response returns only the data to be indexed.
During a crawl, I want to delete from the index those documents that were added in the previous crawl but are no longer part of the web service response.
Is there any way to achieve this? Or can I flush the full index programmatically in the connector code and then add the recent content to the index?
Marged is correct. A feed (which is what the connector can send to the GSA) of type full will purge the existing feed and replace it. Otherwise, your connector is going to have to manage state and prune out documents as you described.
Thanks Marged and Michael for the help. I guess I have to write custom logic in the connector to delete the data from the index.
What you're trying to achieve is exactly what happens when you send a "full" content feed. This is from the documentation:
When the feedtype element is set to full for a content feed, the system deletes all the prior URLs that were associated with the data source. The new feed contents completely replace the prior feed contents. If the feed contains metadata, you must also provide content for each record; a full feed cannot push metadata alone. You can delete all documents in a data source by pushing an empty full feed.
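For illustration, a full content feed is an XML document along these lines (the datasource name, record URL, and content are hypothetical):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE gsafeed PUBLIC "-//Google//DTD GSA Feeds//EN" "">
<gsafeed>
  <header>
    <datasource>webservice_data</datasource>
    <feedtype>full</feedtype>  <!-- replaces everything previously fed for this datasource -->
  </header>
  <group>
    <record url="http://example.com/docs/1" mimetype="text/html">
      <content>Document body goes here.</content>
    </record>
  </group>
</gsafeed>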
Marged is correct that v4.x is the way to go in the future, but if you've already started this with the 3.x connector framework and you're happy with it, there's no need to rush to upgrade. All the related code is open source, and 3.x won't disappear any time soon; there are too many third-party connectors based on it.

Using NEST, index documents in an Elasticsearch instance that is authenticated using Jetty

I have secured a machine that hosts Elasticsearch using the Jetty plugin. Everything works fine with respect to security. But my problem is that I need to add or update documents in that same Jetty-secured index. In NEST I tried to find a method that connects to the secured URI with a username and password so I can index my data, but no method or API helps.
I need to know whether NEST supports indexing into an Elasticsearch instance secured by Jetty and, if the answer is yes, how it can be done.
Thanks,
PDK
Try putting the username/password in the URI that you are using to connect to your Jetty-secured Elasticsearch index:
http://username:password@elasticsearchhost:9200
Since you are required to pass a Uri object to the ConnectionSettings for NEST, you can set it like the following:
(Updated 4/25/14 to reflect correct usage of the Uri class.)
var uri = new Uri("http://username:password@elasticsearchhost:9200");
var client = new ElasticClient(new ConnectionSettings(uri));
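On newer NEST versions (2.x and later), the credentials can also be set explicitly instead of being embedded in the URI; a sketch, with host and credentials as placeholders:

// NEST 2.x+ sketch: pass credentials through the connection settings.
var settings = new ConnectionSettings(new Uri("http://elasticsearchhost:9200"))
    .BasicAuthentication("username", "password");
var client = new ElasticClient(settings);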

couchdb public interface authentication through rewrites

I have a website on its own domain, completely separated from my CouchDB URL through rewrites and virtual hosts. I have reached the point where I need to add user authentication using the _session API, but I'm afraid I can't do it with rewrites:
{
"from": "auth",
"to": "../../../_session"
}
gives me:
{"error":"insecure_rewrite_rule","reason":"too many ../.. segments"}
which is understandable. But now I'm wondering how I would get session authentication to work from my domain without exposing the CouchDB URL. Also, the session seems to be tied to the domain, so if I log in through couchdb.example.com, will it not work when mywebsite.com is used as the public interface?
Thanks
PS: I've just found a post describing an alternative: disabling secure_rewrites in the httpd config file. This seems to work, although I was wondering whether that might not be a good approach and whether there is something better suited to this kind of problem.
I recommend setting secure_rewrites = false and not worrying about it.
We had a great discussion about CouchDB rewrites and security in the Iris Couch forum; also see my later post about using Audit CouchDB. These are the highlights:
The secure_rewrites option is not the ultimate source of security for your data. At best, it is one layer in a multi-layer solution.
The ultimate source of security is the _security object in the database, so that is where you should focus your attention.
The Audit CouchDB tool scans every detail about your couch and will tell you if any red flags are present. It is implemented in JavaScript, so if you have Node.js you can run it; or simply reading the source code gives you an idea of what it is looking for.
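For reference, the setting lives in the [httpd] section of the ini files, so a local override would be:

[httpd]
secure_rewrites = false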
If you are using a vhost, then the /_session handler is available at the vhost root without any rewrite rules (by default).
See the section [httpd] of default.ini:
vhost_global_handlers = _utils, _uuids, _session, _oauth, _users
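So, assuming a vhost entry that maps your domain to the application (the database, design document, and credentials below are placeholders), a login through the public interface could look like:

# local.ini (placeholders):
# [vhosts]
# mywebsite.com = /mydb/_design/myapp/_rewrite

curl -X POST http://mywebsite.com/_session \
     -H 'Content-Type: application/x-www-form-urlencoded' \
     -d 'name=alice&password=secret'

On success CouchDB returns {"ok":true,...} and sets an AuthSession cookie for mywebsite.com, which also answers the cross-domain concern: the session is established against the public domain, not couchdb.example.com.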
