How to remove one key in JanusGraph or Titan Mixed index? - janusgraph

For example:
a mixed index named "personIndex" has three keys: "name", "age", and "uri". How can "uri" be removed from personIndex?
I can't find any way to do this in the source code or in the JanusGraph documentation.
Thank you very much!

It is not possible to remove a key from an index that was already defined. You have to create a new index.
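As a minimal sketch of that approach, you can define a replacement mixed index with the JanusGraph management API that covers only the keys you still want; the new index name "personIndexV2" and the backend name "search" are illustrative and must match your configuration:
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.PropertyKey;
import org.janusgraph.core.schema.JanusGraphManagement;

public final class RebuildPersonIndex {
    // Builds a new mixed index over "name" and "age" only (i.e. without "uri").
    static void rebuildIndex(JanusGraph graph) {
        JanusGraphManagement mgmt = graph.openManagement();
        PropertyKey name = mgmt.getPropertyKey("name");
        PropertyKey age = mgmt.getPropertyKey("age");
        // "personIndexV2" is an illustrative name; "search" is the configured
        // indexing backend from your janusgraph properties file.
        mgmt.buildIndex("personIndexV2", Vertex.class)
            .addKey(name)
            .addKey(age)
            .buildMixedIndex("search");
        mgmt.commit();
    }
}
Once the new index is enabled, the old personIndex can be disabled via mgmt.updateIndex(...) with SchemaAction.DISABLE_INDEX, as described in the index lifecycle section of the JanusGraph documentation.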

Related

Elasticsearch Child Mapping

I haven't been able to find the answer to this question anywhere, so I'll try my luck here.
I know that I can map fields to a specific type in ES and that works well.
But say I have a data set:
{
  "main": {
    "field1": "test",
    "field2": 1,
    ...
  }
}
The fields in main are arbitrary and change per document. What I can't seem to find is whether there is a way to map all fields inside main to text, no matter what type they are. I can get it to work if I explicitly map every field, but the fields change and can be added at any time, so I cannot possibly map them all.
Dynamic templates can be used to map dynamically added fields to a specific mapping. Please refer to the documentation.
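For instance, a dynamic template along these lines maps anything that shows up under main to text; the index name is illustrative, and on older Elasticsearch versions the dynamic_templates array sits under the document type rather than directly under mappings:
PUT my-index
{
  "mappings": {
    "dynamic_templates": [
      {
        "main_fields_as_text": {
          "path_match": "main.*",
          "mapping": { "type": "text" }
        }
      }
    ]
  }
}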

What are aliases in elasticsearch for?

I recently started working in a company that uses Elasticsearch. While most of its concepts are somewhat similar to relational databases and I am able to understand them, I still don't quite get the concept of aliases.
I did not find any such question here and the information provided on the Elasticsearch website did not help much either.
Can someone explain what aliases are for and ideally include an example of a situation where they are needed?
Aliases are like soft links or shortcuts to actual indices.
The advantage is being able to have an alias point to index1a while you build or re-index into index2b; the moment of swapping them is atomic thanks to the alias, which is what all code should point to.
Renaming an alias is a simple remove then add operation within the same API. This operation is atomic, no need to worry about a short period of time where the alias does not point to an index:
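For example, a single _aliases call that moves the alias from index1a to index2b in one atomic step could look like this (the alias name is illustrative):
POST /_aliases
{
  "actions": [
    { "remove": { "index": "index1a", "alias": "my_alias" }},
    { "add":    { "index": "index2b", "alias": "my_alias" }}
  ]
}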
[EDIT] As pointed out by #wholevinski, aliases have other functionalities, such as:
Multiple indices can be specified for an action ...
All the info is in the page you have linked.
[EDIT2] More on why the atomicity is needed and what it buys you:
The key is "zero downtime": https://en.wikipedia.org/wiki/Zero_unscheduled_downtime or https://en.wikipedia.org/wiki/High_availability
https://www.elastic.co/guide/en/elasticsearch/guide/current/index-aliases.html
We will talk more about the other uses for aliases later in the book. For now we will explain how to use them to switch from an old index to a new index with zero downtime.
#arhak covered the topic pretty well.
One use case that (at least for me) made the value of aliases clear was the need to remove out-of-date documents, more specifically when using time-based indices.
For example, you need to keep the logs of an application for at least one year. You decide to use time-based indices, meaning you save into indices with the following format: 2018-02-logs, 2018-03-logs, etc. To be able to search across every index, you create the following alias:
POST /_aliases
{
  "actions": [{
    "add": {
      "alias": "current-logs",
      "indices": [ "2018-02-logs", "2018-03-logs" ]
    }
  }]
}
And query like:
GET /current-logs/_search
Another advantage is that you can delete the out-of-date values very easily:
POST /_aliases
{
  "actions": [
    { "remove": { "alias": "current-logs", "index": "2018-01-logs" }}
  ]
}
and then DELETE /2018-01-logs
Aliases are basically created to group a set of indices and make them accessible regardless of the names they have. An alias is a pointer to a set of indices. You can also apply a query/condition to all of these indices. This is very useful when performing queries or creating dashboards over the same group of indices all the time. In addition, if in the future you change the names of the indices behind an alias, end users will not notice the change: it is transparent to them, and you only need to update the pointer.
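To illustrate the query/condition part: an alias can carry a filter, so anyone searching through it only sees matching documents. The alias name and the level field below are illustrative:
POST /_aliases
{
  "actions": [{
    "add": {
      "index": "2018-02-logs",
      "alias": "error-logs",
      "filter": { "term": { "level": "ERROR" } }
    }
  }]
}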

Enable and Specify ttl for elasticsearch index using elasticsearch-py client

Can someone point me in the right direction for specifying the Elasticsearch index ttl time for documents, using the elasticsearch-py client?
I tried the official documentation but it does not look very helpful.
Did you try giving the ttl value as a parameter to the create function?
The lib you are using must use this API under the hood:
https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-ttl-field.html (Note: this looks **deprecated**; consider using an index per timeframe instead, as the ES docs suggest.)
It is called ttl, so chances are you will find something by searching for ttl in the GitHub repo of the lib: https://github.com/elastic/elasticsearch-py/search?utf8=%E2%9C%93&q=ttl
That search works well because ttl is a specific string that rarely collides with other strings...
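If you are on one of the old Elasticsearch versions (1.x/2.x) where _ttl still exists, a rough sketch with elasticsearch-py would be to enable it in the mapping when the index is created; the index name, type name, and 7-day default below are illustrative:
from elasticsearch import Elasticsearch

es = Elasticsearch()  # assumes a local node; adjust hosts as needed

# _ttl only exists in Elasticsearch 1.x/2.x; it was removed later.
es.indices.create(
    index="my_index",
    body={
        "mappings": {
            "my_type": {
                "_ttl": {"enabled": True, "default": "7d"}
            }
        }
    },
)
On newer versions, _ttl is gone and time-based indices (deleting whole indices as they age out) are the supported approach.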

Indexing and Searching an "array" within an embedded document in MongoDB using Java

Could anyone please tell me how to do indexing and searching of an "array" type within an embedded document in MongoDB using Java?
For example: the outer document is UserDetails and the array is given below:
"languages_known" :
[
"English",
"Kannada",
"Hindi",
"German"
]
I referred this : http://docs.mongodb.org/manual/core/index-multikey/#index-type-multikey.
But I still could not get it working.
Please explain how to do the indexing and searching for the above in Java.
You build an index on an array inside a document using the following in the mongo shell:
db.collection_name.ensureIndex({languages_known: 1}) // In your case
In the Java driver you can use:
collection.createIndex(DBObject keys);
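As a fuller sketch with the current Java driver (connection string, database, and collection names are illustrative, and the field path should be adjusted to wherever the array actually lives in your documents):
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class LanguagesIndexExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> users =
                    client.getDatabase("test").getCollection("UserDetails");

            // Indexing an array field automatically creates a multikey index.
            // If the array sits inside an embedded document, use dot notation,
            // e.g. "details.languages_known".
            users.createIndex(Indexes.ascending("languages_known"));

            // Matches documents whose languages_known array contains "Kannada".
            for (Document doc : users.find(Filters.eq("languages_known", "Kannada"))) {
                System.out.println(doc.toJson());
            }
        }
    }
}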
Can you please clarify what you tried and any errors you may have encountered?

Ruby, Neography Gem: Finding nodes via Key/value Pair

I am using Neography. I created an index, and have a node with this property:
But this code returns nil:
#neo.find_node_index("lucene","id_str", "5426722")
What am I doing wrong?
The format is: #neo.get_node_index(index, key, value)
The name of your index happens to be the same name as your key (I am assuming since we can see the index name, but not the key that was used).
#neo.get_node_index("id_str","id_str", "5426722")
You can find some examples on how to do it in the neography GitHub repository.
Have you tried get_node_index instead? It looks like it should have the same functionality for the values you are supplying, since you aren't passing a query.
