How to see the details of an index in Oracle NoSQL? - oracle-nosql

How can I see the details of an index?
I am executing this command to see the indexes for a table:
sql-> SHOW INDEXES ON Persons;
indexes
idx_age
idx_areacode
idx_income
idx_state_city_income
but how do I see the details? I cannot find a show indexes detail command.

There is no show indexes detail command. Use the describe command instead:
sql-> describe as json index idx_state_city_income on Persons;
{
  "name" : "idx_state_city_income",
  "type" : "secondary",
  "fields" : ["address.state", "address.city", "income"]
}
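If you prefer the shell's tabular output instead of JSON, the same statement without the as json clause should work too (a sketch; verify the exact DESCRIBE syntax against your Oracle NoSQL version):
sql-> describe index idx_state_city_income on Persons;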

Related

Comparing data between different mappings

I am relatively new to Elasticsearch, so I apologize if the terms are not accurate. I have a few indexes and a few almost identical indexes but with fewer fields in the mapping
(the original indexes have data and the new ones with fewer fields are empty).
How can I compare the data and insert the relevant documents into the new indexes with fewer fields?
For example, the original index mapping:
{
  "first_name" : "Dana",
  "last_name" : "Leon",
  "birth_date" : "1990-01-09",
  "social_media" : {
    "facebook_id" : "K8426dN",
    "google_id" : "8764873",
    "linkedin_id" : "Gdna"
  }
}
New mapping with fewer fields:
{
  "first_name" : "Dana",
  "last_name" : "Leon",
  "social_media" : {
    "facebook_id" : "K8426dN",
    "google_id" : "8764873",
    "linkedin_id" : "Gdna"
  }
}
Thanks
You can use reindex with a script:
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html#docs-reindex-change-name
In the "script" you need to specify the fields that you want to remove, like:
ctx._source.remove("birth_date")
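For reference, a minimal sketch of the full reindex request could look like this (old_index and new_index are placeholder names, not from the original question):
POST _reindex
{
  "source": { "index": "old_index" },
  "dest": { "index": "new_index" },
  "script": {
    "lang": "painless",
    "source": "ctx._source.remove('birth_date')"
  }
}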
The second option is to use an ingest pipeline with the "remove" processor:
https://www.elastic.co/guide/en/elasticsearch/reference/current/remove-processor.html, and to run the reindex with a default pipeline defined in the index settings, but this will be harder to implement.
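A rough sketch of that second option (assuming a recent Elasticsearch version that supports the index.default_pipeline setting, and a placeholder pipeline name drop_birth_date):
PUT _ingest/pipeline/drop_birth_date
{
  "processors" : [
    { "remove" : { "field" : "birth_date" } }
  ]
}

PUT new_index/_settings
{
  "index.default_pipeline" : "drop_birth_date"
}
Any subsequent reindex (or other indexing) into new_index then runs the pipeline and drops birth_date.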

Use query result as parameter for another query in Elasticsearch

How can I use the result of one query as a parameter to another query in Elasticsearch?
Let's consider this dummy data:
PUT /_bulk
{"index":{"_index":"movies","_id":"2"}}
{"name":"abc","old_data":20,"data":0,"old_data_id":"xy"}
{"index":{"_index":"movies","_id":"3"}}
{"name":"def","old_data":20,"data":2,"old_data_id":"xy"}
{"index":{"_index":"movies","_id":"4"}}
{"name":"ghi","old_data":20,"data":0,"old_data_id":"yz"}
{"index":{"_index":"movies","_id":"5"}}
{"name":"jkl","old_data":18,"data":2,"old_data_id":"xy"}
{"index":{"_index":"movies","_id":"6"}}
{"name":"mno","old_data":18,"data":18,"old_data_id":"rt"}
I'm using Elasticsearch, and I'm trying to use a query result as a parameter for another query, just like this SQL query does:
SELECT name FROM movies
WHERE old_data_id IN (SELECT old_data_id FROM movies WHERE old_data > 0 AND data = 0);
I am running all of these queries from my local Kibana instance.
I tried to write an SQL query like this:
POST /_sql?format=txt
{
"query": "SELECT name FROM test_2 WHERE old_data_id IN (SELECT old_data_id FROM test_2 where name='abc')"
}
but it returned an error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "parsing_exception",
        "reason" : "line 1:32: IN query not supported yet"
      }
    ],
    "type" : "parsing_exception",
    "reason" : "line 1:32: IN query not supported yet"
  },
  "status" : 400
}
After some research I found this post: Can we use result from one query as an input to another query in elasticsearch? But how can I implement it?
I want to write two separate queries in Elasticsearch, where the first query returns all the old_data_id values and the second query uses that result to find all the names corresponding to those ids. How can we do that?
How can we connect two queries (nested queries)?
Do we have to store the result of the first query somewhere and then use it?
Do we have to use Java or Python? If yes, how? Please give a code snippet if possible.
Is there any way to do it, or is it not feasible?
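As a rough sketch of the manual two-step approach described above (assuming the movies index from the bulk example; depending on your mapping, the terms query may need to target old_data_id.keyword instead of old_data_id):
First request, get the old_data_id values:
POST /movies/_search
{
  "_source": ["old_data_id"],
  "query": {
    "bool": {
      "filter": [
        { "range": { "old_data": { "gt": 0 } } },
        { "term": { "data": 0 } }
      ]
    }
  }
}
Second request, copy the returned values (here "xy" and "yz") into a terms query:
POST /movies/_search
{
  "_source": ["name"],
  "query": {
    "terms": { "old_data_id": ["xy", "yz"] }
  }
}
The copying step has to happen in the client (Kibana Dev Tools by hand, or a small script), since plain Elasticsearch search requests have no server-side sub-query for this.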

Can the terms lookup mechanism query by a field other than id?

Here is the Elasticsearch official documentation about terms:
https://www.elastic.co/guide/en/elasticsearch/reference/2.1/query-dsl-terms-query.html
As we can see, if we want to do a terms lookup query, we should use a command like this:
curl -XGET localhost:9200/tweets/_search -d '{
  "query" : {
    "terms" : {
      "user" : {
        "index" : "users",
        "type" : "user",
        "id" : "2",
        "path" : "followers"
      }
    }
  }
}'
But what if I want to query by another field of users?
Assume that users has some other fields such as name: can I use the terms lookup mechanism to find the tweets by giving the user's name instead of the id?
I have tried a command like this:
curl -XGET localhost:9200/tweets/_search -d '{
  "query" : {
    "terms" : {
      "user" : {
        "index" : "users",
        "type" : "user",
        "name" : "Jane",
        "path" : "followers"
      }
    }
  }
}'
but it returns an error.
Looking forward to your help. Thank you!
The terms lookup mechanism is basically a built-in optimization to avoid making two queries to JOIN two indices, i.e. one in index A to get the ids to look up and a second to fetch the documents with those ids in index B.
Contrary to SQL, such a JOIN can only work on the id field, since that is the only way to uniquely retrieve a document from Elasticsearch via a GET call, which is exactly what Elasticsearch does in the terms lookup.
So to answer your question, the terms lookup mechanism will not work on any field other than the id field, since the document to be retrieved must be unique. In your case, ES would not know how to fetch the document for the user with name Jane, since name is just a field present in the user document, but in no way a unique identifier for user Jane.
I think you did not understand exactly how this works. The terms lookup query works by reading values from a field of a document with the given id. In this case, you are trying to match the value of the field user in the tweets index with the values of the field followers in the document with id "2" in the users index and user type.
If you want to read from any other field, then simply mention that field in "path".
What you mainly need to understand is that the lookup values are all fetched from a field of a single document and not multiple documents.
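If all you have is the user's name, one workaround (a sketch based on the indexes from the question, not a built-in feature of the terms lookup) is to first resolve the name to the document id yourself and then run the normal lookup:
curl -XGET localhost:9200/users/user/_search -d '{
  "query" : {
    "match" : { "name" : "Jane" }
  }
}'
Then take the _id from the hit (here "2") and use it in the terms lookup exactly as in the documented example:
curl -XGET localhost:9200/tweets/_search -d '{
  "query" : {
    "terms" : {
      "user" : {
        "index" : "users",
        "type" : "user",
        "id" : "2",
        "path" : "followers"
      }
    }
  }
}'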

Query two indexes simultaneously in Kibana 4?

Whenever I create a visualization, Kibana 4 asks me to select the index for doing the search. My project requires searching data that is present in multiple indexes and hence I am stuck. I wish to search two indexes for my data and then visualize them. Any help would be valuable.
Kibana can create visualizations from multiple indexes, but the indexes should have similar names, or alias names with similar names. For example, you can simply grab data from the indexes logstash-2015-01-01 and logstash-2015-01-02 using the mask logstash-*.
But yes, it would be handy if we could write something like index1,another_index.
A solution that works in any case: create an alias in Elasticsearch for the indexes you want to query simultaneously and then use the alias as an index-pattern in Kibana.
In the Marvel plugin, through the Sense interface, you can create an alias for multiple indexes with this request:
POST _aliases
{
"actions" : [
{ "add" : { "index" : "test1", "alias" : "alias1" } },
{ "add" : { "index" : "test2", "alias" : "alias1" } }
]
}
Or using curl:
curl -XPOST 'http://localhost:9200/_aliases' -d '
{
"actions" : [
{ "add" : { "index" : "test1", "alias" : "alias1" } },
{ "add" : { "index" : "test2", "alias" : "alias1" } }
]
}'
Then, you just need to add an index-pattern in Kibana for "alias1" and create your visualizations.
For more information on aliases, see https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-aliases.html
Thanks for all the help, but I figured out a way in which this could be done.
In the Index Patterns settings of Kibana 4, create an index pattern _all. This index pattern contains all the indexes present in your Elasticsearch. Hence, when you create a new visualization, simply select the _all index pattern and all the data fields from all the indexes in your Elasticsearch are accessible, so you can easily use them to create visualizations.
If I understand what you are asking correctly, then it may depend on how you've named your indexes.
I can query multiple logstash indexes by selecting my pattern 'logstash-*'. When you set up your indexes, it gives you the option to specify a pattern.
(Settings => Indices => Index Pattern => Add New)
I hope that helps.
Two wildcards (i.e. *-*) works for me in Kibana 4.
I'm not sure I understand correctly, but I think your best option is to create that visualization on both indexes you want separately, and build a dashboard including both visualizations.
Kibana can't display a single visualization with searches from two separate indexes.

How to detect changes in a database and automatically add new rows to an Elasticsearch index

What I've already done:
I connected my HBase with Elasticsearch via this tutorial:
http://lessc0de.github.io/connecting_hbase_to_elasticsearch.html
I get an index with the HBase table content, but after adding a new row to HBase, it is not automatically added to the Elasticsearch index. I tried to add this line to my conf:
"schedule" : "* 1/5 * ? * *"
and this mapping:
"mappings": {
"jdbc" : {
"_id" : {
"path" : "ID"
}
}
}
which assigns _id = ID, and ID has a unique value in my HBase table.
It works well: when I add a new row to HBase, it is uploaded to the index in less than 5 minutes. But it is not good for performance, because every 5 minutes it executes the full query and only skips the old content because _id has to be unique. That is fine for a small database, but I have over 10 million rows in my HBase table, so my index is working all the time.
Is there any solution or plugin for Elasticsearch to automatically detect changes in the database and add only the new rows to the index?
I create the index using:
curl -XPUT 'localhost:9200/_river/jdbc/_meta' -d '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:phoenix:localhost",
    "user" : "",
    "password" : "",
    "sql" : "select ID, MESSAGE from test",
    "schedule" : "* 1/5 * ? * *"
  }
}'
Thanks for help.
You're looking for something called a "river" plugin. There are various ones around supporting all kinds of databases and even the physical file system. However, the one you're looking for is the HBase River Plugin.
