I am trying to index and search all Microsoft Office document types, and I have found that it does not work for Excel binary workbooks (.xlsb).
Indexing completes successfully, but searching cannot find any words from the file.
These are the steps I tried:
curl -X PUT "localhost:9200/test/attachment/_mapping" -d '{
  "attachment" : {
    "properties" : {
      "file" : {
        "type" : "attachment",
        "fields" : {
          "title" : { "store" : "yes" },
          "file" : { "term_vector" : "with_positions_offsets", "store" : "yes" }
        }
      }
    }
  }
}'
# slurp the whole file (-0777) and pass "" so encode_base64 emits no line breaks;
# encoding line by line would corrupt the binary and put newlines inside the JSON string
coded=`cat test.xlsb | perl -MMIME::Base64 -0777 -ne 'print encode_base64($_, "")'`
json="{\"file\":\"${coded}\"}"
echo "$json" > json.file
curl -X POST "localhost:9200/test/attachment/" -d @json.file
curl "localhost:9200/_search?pretty=true" -d '{
"fields" : ["title"],
"query" : {
"query_string" : {
"query" : "sheet"
}
},
"highlight" : {
"fields" : {
"file" : {}
}
}
}'
We just added streaming/read-only xlsb support in POI (coming in 3.15-beta3). Once that is released, we'll upgrade Apache Tika (1.15?), and then once Elastic upgrades, you should be good to go.
A mere 4 years later!
I have just one index in Elasticsearch, named aa-bb-YYYY-MM.
Documents in this index contain a field I want to use as the date field.
Those documents were inserted by a custom script (not by Logstash).
When creating the index pattern in Kibana:
If I enter aa-bb-*, the date field is not found.
If I enter aa-*, the date field is not found.
If I enter aa*, the date field is found, and I can create the index pattern.
But I really need to group indexes by the first two "dimensions". I tried using "_" instead of "-", with the same result.
Any idea what is going on?
It's working for me. I'm on the latest build of the 5.0 release branch (just past the beta1 release); I don't know which version you're on.
I created this index and added 2 docs:
curl --basic -XPUT 'http://elastic:changeme@localhost:9200/aa-bb-2016-09' -d '{
  "settings" : {
    "number_of_shards" : 1
  },
  "mappings" : {
    "test" : {
      "properties" : {
        "date" : { "type" : "date" },
        "action" : {
          "type" : "text",
          "analyzer" : "standard",
          "fields" : {
            "raw" : { "type" : "keyword" }
          }
        },
        "myid" : { "type" : "integer" }
      }
    }
  }
}'
curl -XPUT 'http://elastic:changeme@localhost:9200/aa-bb-2016-09/test/1' -d '{
  "date" : "2015-08-23T00:01:00",
  "action" : "start",
  "myid" : 1
}'
curl -XPUT 'http://elastic:changeme@localhost:9200/aa-bb-2016-09/test/2' -d '{
  "date" : "2015-08-23T14:02:30",
  "action" : "stop",
  "myid" : 1
}'
and I was able to create the index pattern with aa-bb-*
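If it still fails for you, it may be worth verifying that the field is mapped as date in every index the pattern matches, since Kibana only offers time fields that are consistently mapped across the matching indices. A quick check with the field mapping API (I'm assuming your field is also named date, like mine):
curl --basic 'http://elastic:changeme@localhost:9200/aa-bb-*/_mapping/field/date?pretty'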
I don't know why, but searching for a document via URI Search returns the right document, while the same document is not found through the query DSL.
To reproduce the issue:
Without any index created, I insert this document:
curl http://localhost:9299/integrationtest-index/searchable/ID_XXXX2 -d '{ "ref" : "XXXX2", "field1" : "value1" }'
So the index is created automatically with the default mapping (type searchable):
curl http://localhost:9299/integrationtest-index?pretty
{
  "integrationtest-index" : {
    "aliases" : { },
    "mappings" : {
      "searchable" : {
        "properties" : {
          "field1" : {
            "type" : "string"
          },
          "ref" : {
            "type" : "string"
          }
        }
      }
    },
    "settings" : {
      "index" : {
        "field1" : "value1",
        "ref" : "XXXX2",
        "number_of_shards" : "5",
        "creation_date" : "1466780216631",
        "number_of_replicas" : "1",
        "uuid" : "GBj2VF-wQy6JP74AqoIn5g",
        "version" : {
          "created" : "2020099"
        }
      }
    },
    "warmers" : { }
  }
}
This query returns one document:
curl http://localhost:9299/integrationtest-index/searchable/_search?q=ref:XXXX2
But this other query responds that the document does not exist:
curl -XPOST http://localhost:9299/integrationtest-index/searchable/_search/exists -d '
{
  "query": {
    "term" : {
      "ref" : "XXXX2"
    }
  }
}'
Why does the last query say that the document does not exist?
Environment:
ElasticSearch 2.2.0
Ubuntu 16.04 LTS
OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-0ubuntu4~16.04.1-b14)
I run into the same problem every few months, so I decided to answer myself and share my own silly mistakes.
By default, Elasticsearch indexes string fields as analyzed, so a term query for the exact original value does not find any document.
When you use URI Search, Elasticsearch executes a query_string query, not a term query, which is why that form works.
This query works:
curl -XPOST http://localhost:9299/integrationtest-index/searchable/_search/exists -d '
{
  "query": {
    "match" : {
      "ref" : "XXXX2"
    }
  }
}'
There is more information in the documentation, under the section "Why doesn’t the term query match my document?".
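You can see the mismatch directly by asking Elasticsearch how the value is analyzed for that field (standard _analyze API, using the index and field from the question):
curl -XGET 'http://localhost:9299/integrationtest-index/_analyze?field=ref&text=XXXX2&pretty'
This should return the single lowercased token xxxx2. The term query looks for the exact term XXXX2, which is not in the index; the match query analyzes its input the same way the field was analyzed at index time, so it searches for xxxx2 and finds the document.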
I defined metadata in the mapping of the Elasticsearch image plugin.
Mapping:
"photo" : {
"mappings" : {
"scenery" : {
"properties" : {
"my_img" : {
"type" : "image",
"feature" : {"FCTH" : { }, ... },
"metadata" : {
"jpeg.image_height" : {"type" : "string","store" : true},
"jpeg.image_width" : {"type" : "string","store" : true}
}
}
}
}
}
}
After indexing, the metadata is not returned when I search.
How do I get the metadata?
I tried:
curl -XPOST 'localhost:9200/photo/scenery/_search' -d '{
  "query":{
    "image":{
      "my_img":{
        "feature":"CEDD",
        "index":"photo",
        "type":"scenery",
        "id":"0",
        "path":"my_img",
        "hash":"BIT_SAMPLING"
      }
    }
  }
}'
Result:
{"took":14,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":5,"max_score":1.0,"hits":[{"_index":"photo","_type":"scenery","_id":"0","_score":1.0, "_source" : {"file_name": "376423.jpg", "my_img": "/9j/4AAQSkZJRgABAQ...
Perhaps the original data (the base64-encoded image) is being returned in the _source field. You can use the fields option instead.
Try this query:
curl -XPOST 'localhost:9200/photo/scenery/_search' -d '{
  "query":{
    ...
  },
  "fields": ["my_img.metadata.jpeg.image_height", "my_img.metadata.jpeg.image_width"]
}'
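Since both metadata subfields are mapped with "store" : true, you should also be able to fetch them on a direct GET of the document, without running a search. This is only a guess that the plugin registers them as ordinary stored fields:
curl 'localhost:9200/photo/scenery/0?fields=my_img.metadata.jpeg.image_height,my_img.metadata.jpeg.image_width&pretty'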
When I put the following into elasticsearch-1.0.1, I expect the search queries to return the posts with IDs 200 and 201, but nothing is returned: Hits: 0.
What am I doing wrong? I'm searching for exactly what I put in, but get nothing out (here's the test code for download: http://petoria.de/tmp/arabictest.sh).
But please keep in mind: I want to use the Arabic analyzer, because I want to develop my own analyzer later.
Best,
Koem
curl -XPOST localhost:9200/posts -d '{
"settings" : {
"number_of_shards" : 1
}
}'
curl -XPOST localhost:9200/posts/post/_mapping -d '
{
  "post" : {
    "properties" : {
      "arabic_text" : { "type" : "string", "index" : "analyzed", "store" : true, "analyzer" : "arabic" },
      "english_text" : { "type" : "string", "index" : "analyzed", "store" : true, "analyzer" : "english" }
    }
  }
}'
curl -XPUT 'http://localhost:9200/posts/post/200' -d '{
"english_text" : "palestinian1",
"arabic_text" : "فلسطينيه"
}'
curl -XPUT 'http://localhost:9200/posts/post/201' -d '{
"english_text" : "palestinian2",
"arabic_text" : "الفلسطينية"
}'
Search for palestinian1:
curl -XGET 'http://localhost:9200/posts/post/_search' -d '{
  "query": {
    "query_string" : {
      "analyzer" : "arabic",
      "query" : "فلسطينيه"
    }
  }
}'
Search for palestinian2:
curl -XGET 'http://localhost:9200/posts/post/_search' -d '{
  "query": {
    "query_string" : {
      "analyzer" : "arabic",
      "query" : "الفلسطينية"
    }
  }
}'
Just add the encoding to your request; you can do it by specifying the "Content-Type" header as below:
Content-Type:text/html;charset=UTF-8
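With curl, that means passing the header explicitly on the search request. A minimal sketch using the first search from the question (I use application/json rather than text/html, since the body is JSON; the charset declaration is the part that fixes the encoding):
curl -XGET 'http://localhost:9200/posts/post/_search' -H 'Content-Type: application/json; charset=UTF-8' -d '{
  "query": {
    "query_string" : {
      "analyzer" : "arabic",
      "query" : "فلسطينيه"
    }
  }
}'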
I'm attempting to use the percolation function in Elasticsearch. It works great, but out of the box there is no stemming to handle singulars/plurals etc. The documentation is rather thin on this topic, so I was wondering whether anyone has gotten this working and what settings are required. At the moment I'm not indexing my documents, since I'm not searching them, just passing them through the percolator to trigger notifications.
You can use the percolate API to test documents against percolators without indexing them. However, the percolate API requires an index and a type for your doc, so that it knows how each field in the document is defined (or mapped).
Analyzers belong to an index, and the fields in a mapping/type definition can use either globally defined analyzers, or custom analyzers defined for your index.
For instance, we could define a mapping for index test, type test using a globally defined analyzer as follows:
curl -XPUT 'http://127.0.0.1:9200/test/?pretty=1' -d '
{
  "mappings" : {
    "test" : {
      "properties" : {
        "title" : {
          "type" : "string",
          "analyzer" : "english"
        }
      }
    }
  }
}
'
Alternatively, you could set up a custom analyzer that belongs just to the test index:
curl -XPUT 'http://127.0.0.1:9200/test/?pretty=1' -d '
{
  "mappings" : {
    "test" : {
      "properties" : {
        "title" : {
          "type" : "string",
          "analyzer" : "my_english"
        }
      }
    }
  },
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "my_english" : {
          "stopwords" : [],
          "type" : "english"
        }
      }
    }
  }
}
'
Now we can create our percolator, specifying which index it belongs to:
curl -XPUT 'http://127.0.0.1:9200/_percolator/test/english?pretty=1' -d '
{
  "query" : {
    "match" : {
      "title" : "singular"
    }
  }
}
'
And test it out with the percolate API, again specifying the index and the type:
curl -XGET 'http://127.0.0.1:9200/test/test/_percolate?pretty=1' -d '
{
  "doc" : {
    "title" : "singulars"
  }
}
'
# {
#   "ok" : true,
#   "matches" : [
#     "english"
#   ]
# }
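The match works because the english analyzer stems both singular and singulars down to the same token. You can sanity-check that with the analyze API (assuming the my_english analyzer from the second example above):
curl -XGET 'http://127.0.0.1:9200/test/_analyze?analyzer=my_english&text=singulars&pretty=1'
Both the percolator query and the percolated doc end up analyzed to the token singular, which is why the percolator matches despite the plural.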