I am developing an app in Java. It has MongoDB at the back end, which stores files (in GridFS), and I use the Spring framework to interact with MongoDB. I want to search for text present in the stored documents (PDF, DOC, TXT files). I know MongoDB supports full-text search (from 2.4). My questions are:
Does the Spring framework support full-text search, or should we take the help of Solr or Lucene?
If both of the above are possible, which is the better option?
What about indexing? I don't have much knowledge regarding indexing in full-text search.
When will 2.4 be available?
1. Spring does not support full-text search among its core features; however, the Spring Data project has two sub-projects that allow interaction with Solr and Elasticsearch, both of which are full-text search engines built on top of Apache Lucene. For detailed information look at these links:
https://github.com/dadoonet/spring-elasticsearch
https://github.com/SpringSource/spring-data-solr
2. It depends on your needs. Lucene is a low-level library, while Elasticsearch and Solr are out-of-the-box search engines built on top of Lucene. I think Elasticsearch provides better integration with MongoDB, through the MongoDB river, which supports indexing of GridFS attachments. Look at these links:
http://www.elasticsearch.org/
https://github.com/richardwilly98/elasticsearch-river-mongodb/
3. You need to clarify this question.
4. I don't know when MongoDB version 2.4 will be available, but don't forget that full-text search is still an experimental feature, and I also think it does not yet support GridFS.
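Regarding point 1, here is a rough sketch of what querying through Spring Data Elasticsearch could look like. The class, index, and field names below are made up for illustration, and the exact annotations vary between versions:

```java
import java.util.List;

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

// Hypothetical document type holding the text extracted from one stored file.
@Document(indexName = "files")
class StoredFile {
    @Id
    private String id;
    private String filename;
    private String content; // extracted text to be full-text searched
    // getters and setters omitted for brevity
}

// Spring Data derives a full-text query from the repository method name.
interface StoredFileRepository extends ElasticsearchRepository<StoredFile, String> {
    List<StoredFile> findByContentContaining(String text);
}
```

The point is that once the extracted text lives in an Elasticsearch index, searching it from Spring is a one-line repository method rather than hand-written query code.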
MongoDB text search will not pull text out of PDF, DOC, or, for that matter, any files that are stored in GridFS. From the perspective of MongoDB, GridFS files are uninterpreted binary.
If you'd like to use MongoDB's new text search capabilities to search different file types, you'll need to do the work in your application to extract text from these files and add that text to documents that you explicitly insert into MongoDB. You can use existing libraries such as Apache Tika to do the heavy lifting. Note that Tika is what Solr/Lucene uses to do text extraction from rich-text document types.
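A minimal sketch of that extraction step, assuming Apache Tika and the MongoDB Java driver are on the classpath (the file, database, and collection names here are placeholders):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.tika.Tika;
import org.bson.Document;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;

public class ExtractAndStore {
    public static void main(String[] args) throws Exception {
        // Tika auto-detects the format (PDF, DOC, TXT, ...) and returns plain text.
        Tika tika = new Tika();

        try (InputStream in = Files.newInputStream(Paths.get("report.pdf"));
             MongoClient client = MongoClients.create("mongodb://localhost:27017")) {

            String text = tika.parseToString(in);

            // Store the extracted text next to a reference to the GridFS file,
            // in a regular collection that a MongoDB text index can cover.
            MongoCollection<Document> texts = client.getDatabase("mydb")
                                                    .getCollection("file_texts");
            texts.insertOne(new Document("gridfsFile", "report.pdf")
                                .append("text", text));
        }
    }
}
```

A text index on the text field (e.g. db.file_texts.createIndex({text: "text"}) from the shell) then makes the extracted content searchable.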
As for text search indexing in MongoDB, please refer to the release notes here
Is it possible to save a bunch of queries into a single JSON file to import in Kibana Console?
I know there's an option to save a single query[2], and the Kibana console is based on local storage, but I would like to load the queries based on parameters, such that changing the params (e.g. load_from=filename.json) loads a different set of queries.
For example, when I open http://localhost:5601/app/kibana#/dev_tools/console?load_from=filename.json, it should open the Kibana console with ES queries from the file.
EDIT: As a workaround, it's possible to do this with Postman API Client or similar API clients.
Solution:
EDIT 2 on 22/02/2022: Kibana Spaces is the answer. It lets you organize dashboards and other saved objects into meaningful categories[3]. Whenever you load http://localhost:5601/ it lets you choose the space you want to work with. Having multiple browser tabs with different saved spaces should work for most cases.
[2] https://www.elastic.co/guide/en/kibana/master/save-load-delete-query.html
[3] https://www.elastic.co/guide/en/kibana/master/xpack-spaces.html
Unfortunately, that's not possible yet.
Elastic is (supposedly) working on a new Kibana feature (tabbed console panes #10095) that will provide support for better organizing the code in the Dev Tools application. The issue has been opened for a while and not much seems to be happening, so we'll see.
The release date of that feature is not known yet.
I am new to Elasticsearch. I have read its tutorials, but I need guidance on my problem:
I have a collection of PDF documents and PowerPoint files on my system. I need to build a system using Elasticsearch where I can retrieve these files on the basis of keywords present in them. Can someone please guide me on how to proceed and index my documents? Do I need to parse my PDFs and convert them to JSON format using Tika or FSCrawler and then provide that to Elasticsearch?
Thank you.
You should set up FSCrawler; it will do the parsing and make the files' content searchable.
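As a starting point, an FSCrawler job is driven by a small settings file; a minimal one might look roughly like the following (the job name, path, and node URL are placeholders, and the exact keys depend on the FSCrawler version):

```yaml
name: "my_docs"
fs:
  url: "/path/to/documents"   # the folder containing the PDF/PPT files
  update_rate: "15m"          # how often to re-scan the folder
elasticsearch:
  nodes:
  - url: "http://127.0.0.1:9200"
```

FSCrawler then watches the folder, runs the Tika extraction itself, and indexes the resulting text into Elasticsearch, so no custom parsing code is needed.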
I'm new to Elasticsearch and am still trying to set it up. I have installed Elasticsearch 5.5.1 using default values, and I have also installed Kibana 5.5.1 using the default values. I've also installed the ingest-attachment plugin with the latest X-Pack plugin. Elasticsearch is running as a service and I have Kibana open in my browser. On the Kibana dashboard I have an error stating that it is unable to fetch mappings; I guess this is because I haven't set up any indices or pipelines yet.
This is where I need some steer: all the documentation I've found online so far isn't particularly clear. I have a directory with a mixture of document types, such as PDF and DOC files. My ultimate goal is to be able to search these documents with values that a user will enter via an app. I'm guessing I next need to use the Dev Tools/console window in Kibana with the 'PUT' command to create a pipeline, but I'm unsure how to do this so that it points to my directory of documents. Can anybody provide an example of this for this version, please?
If I understand you correctly, let's first set some basic understanding about elasticsearch:
Elasticsearch, in its simplest definition, is a "search engine": you store some data, and Elasticsearch helps you search it using a search criterion and retrieve the relevant data back.
You need a "container" to save your data to, and like any database engine Elasticsearch has one, but the terms are somewhat different: what is called a "database" in SQL-like systems is called an "index" in Elasticsearch, and what you know as a "table" is called a "type".
From my understanding, you will need to create your index (with or without mappings) as a starting point. I recommend starting without mappings just to get things working, but later on it's highly recommended to work with mappings if applicable, because Elasticsearch is smart, but it cannot know more about your data than you do.
Kibana complained because it failed to find a proper index to start with; it asked you to provide either an index-name pattern or a specific index name so it can infer the mappings and give you the nice features of querying, displaying charts, etc. of your data. Once you create your index, provide its name on Kibana's starting page and you will be ready to go.
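To get you started with the ingest-attachment plugin you mentioned, a pipeline can be created from the Kibana Dev Tools console roughly like this (the pipeline, index, and field names are examples; the base64 payload is the small RTF sample from the Elasticsearch documentation):

```
PUT _ingest/pipeline/attachments
{
  "description": "Extract text from base64-encoded documents",
  "processors": [
    { "attachment": { "field": "data" } }
  ]
}

PUT docs/doc/1?pipeline=attachments
{
  "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0="
}
```

One caveat: the ingest-attachment plugin does not read files from a directory by itself; your app has to base64-encode each file and send it in the request. If you want something that watches a folder of documents, a tool like FSCrawler does that part for you.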
Let me know if you need something more specific to your needs :)
I have a requirement for a document management system to handle pdf,word,xls,ppt with semantic search.
I started looking into Elasticsearch for this and stumbled on Apache Jackrabbit, and subsequently on OpenKM and Hippo. Even though core features like versioning exist in Jackrabbit, I need some pointers on how to go about this.
I need help navigating through the following concerns:
Should I just use Elasticsearch and the Elasticsearch attachment plugin, or use Jackrabbit with a MySQL backend and Elasticsearch to index the documents?
Or should I use OpenKM?
Any pointers would be greatly appreciated. This would finally require App integration.
Update: Logically, using Elasticsearch for search makes sense, but I figure I cannot use it as the primary data source. What are the best options for (primary) storage: Apache Jackrabbit with MySQL? Since all features are prebuilt in OpenKM, would that be a better option?
What is it you want to achieve? Are you looking to manage making the documents available, is it about managing the content in documents? ES, or any search engine, is generally not a primary data source.
I can't give you any advice w.r.t. OpenKM (neither for nor against). Whether Hippo is a match depends on your case, which I'd need to know more about.
I'm using Logstash, Elasticsearch and Kibana to process, store and visualize my logs.
My setup works fine, but now I'm looking for a new tool: before ELK I used to read my logs in Notepad++ or glogg (I'm on Windows), and now I use only Kibana's Discover tab.
Do you think I can find a native application that looks like a read-only Notepad++, queries Elasticsearch, and displays my logs like before?
The three features I actually need are :
querying logs from multiple sources,
for a specified date range,
and displaying them quickly in a concise, fast viewer.
I don't think it's very complicated to implement, so I'm wondering if it already exists :)
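For what it's worth, those three requirements map onto a single Elasticsearch search request that such a viewer would issue; a sketch (the index names and date range below are examples):

```
GET app-logs-*,system-logs-*/_search
{
  "query": {
    "bool": {
      "filter": [
        { "range": { "@timestamp": { "gte": "2017-08-01", "lte": "2017-08-02" } } }
      ]
    }
  },
  "sort": [ { "@timestamp": "asc" } ],
  "size": 500
}
```

Multiple comma-separated index patterns cover the "multiple sources" part, the range filter covers the date range, and the viewer would only need to render the hits as plain lines.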