How to pass a specific secondary key, if I have two of them? - tarantool

How do I pass a specific secondary key to alter() if I have two secondary keys?

You can specify the required index like this:
box.space.space_name.index.<index name>:alter(...)
Replace <index name> with the name of the index you want to alter.
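For example, assuming a hypothetical space named users with a secondary index named by_email, making that index non-unique would look like:
box.space.users.index.by_email:alter({unique = false})  -- only by_email changes; any other secondary index is untouched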

Related

Are composite indexes supported in Memgraph?

Can I create an index on more than one property for any given label? I am running a query like:
MATCH (n:Node) WHERE n.id = xyz AND n.source = abc RETURN n;
and I want to know whether I can create multiple property indexes and if not, is there a good way to query for nodes matching multiple properties?
Memgraph does not support composite indexes, but you can create multiple label-property indexes. For example, run
CREATE INDEX ON Node(id);
CREATE INDEX ON Node(source);
To check if they are properly created, run SHOW INDEX INFO;.
Use EXPLAIN/PROFILE on the query (inspecting and profiling queries) to see which plan is chosen and to test the performance; one label-property index may be good enough.
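For instance, to see which plan (and which index) is chosen for the query from the question, you could run something like this (placeholder values quoted as strings here):
EXPLAIN MATCH (n:Node) WHERE n.id = "xyz" AND n.source = "abc" RETURN n;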

Perform get query with value other than key

Is it possible for me to use the get query to query for a value other than the primary key? It seems I can only pass in the id column; is there a way to perform the get query with a column other than the id column?
Or can I just do this with a normal list query using maybe a filter or something? Thanks for any help!
Yes, you can issue any DynamoDB query through AppSync. This tutorial provides a good introduction covering PutItem, UpdateItem, and GetItem: https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-resolvers.html. If you need to get multiple values by a key, you should use the DynamoDB Query operation: https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-dynamodb.html#aws-appsync-resolver-mapping-template-reference-dynamodb-query.
When using DynamoDB you need to bake your access patterns into the key schema(s) of your DynamoDB table and secondary indexes. For example, if you want to get a record by "email", then you should create a table where the hash key is "email". You would then be able to perform a GetItem operation by "email". If you need to query by email and have records sorted by date, then you would need a table where the hash key is "email" and the sort key is "date", and so on.
You are able to create secondary indexes and, if you want to get a bit more advanced, create composite index values and overload indexes to optimize your DynamoDB tables for your access patterns. Check out the DynamoDB docs to learn more: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-indexes.html.
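As a rough sketch, an AppSync request mapping template that runs a Query against a hypothetical global secondary index named email-index, keyed by an email argument, could look like the following (the index and field names are assumptions; see the resolver mapping template reference linked above):
{
    "version": "2017-02-28",
    "operation": "Query",
    "index": "email-index",
    "query": {
        "expression": "email = :email",
        "expressionValues": {
            ":email": { "S": "$ctx.args.email" }
        }
    }
}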

Are IDs guaranteed to be unique across indices in Elasticsearch 6+?

With mapping types being removed in Elasticsearch 6.0 I wonder if IDs of documents are guaranteed to be unique across indices?
Say I have three indices, all with a "parent" field that contains an ID. Do I need to include which index the ID belongs to or can I just search through all three indices when looking for a document with the given ID?
IDs are not unique across indices.
If you want to refer to a document you need to know both the index name and the ID.
Explicit IDs
If you explicitly set the document ID when indexing, nothing prevents you from using the same ID twice for documents going in different indices.
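For example, both of the following requests succeed and leave two distinct documents sharing the ID 1 (the index and field names here are made up):
$ curl -XPUT 'http://localhost:9200/index_one/doc/1' -H 'Content-Type: application/json' -d '{"field": "foo"}'
$ curl -XPUT 'http://localhost:9200/index_two/doc/1' -H 'Content-Type: application/json' -d '{"field": "bar"}'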
Autogenerated IDs
If you don't set the ID when indexing, ES will generate one before storing the document.
According to the code, the ID is securely generated from a random number, the host MAC address and the current timestamp in ms. Additional work is done to ensure that the timestamp (and thus the ID sequence) increases monotonically.
To generate the same ID twice, the JVM would have to pick the same random number at startup and the document ID would have to be generated at a specific moment with sub-millisecond precision. So while the chance of a collision exists, it's so small that I wouldn't worry about it (just as I wouldn't worry about collisions when using a hash function to check file integrity).
Final note: as a code comment notes, the implementation is opaque and could change at any time, so what I wrote might not hold true in future versions.

Creating a composite index and a mixed index on the same key in Titan graph

Let's assume that we have a user vertex and a property user_email. This field has a unique constraint on it. I have tried creating a mixed index and a composite index on this property key, and I was able to achieve that. But is it really good practice? Can having both types of indexes on the same property key have any impact on the performance of the indexing backend (I am using Elasticsearch)?
That's fine and a common practice. Depending on your use cases, you may need both indexes (see the sketch after this list). In your example it might be that you want to
ensure uniqueness (standard index required)
perform a user login (exact match required)
find all users with a hotmail address (mixed index required)
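A sketch of defining both indexes over the same property key through Titan's management API could look like this in the Gremlin console (the index names and the "search" backend name are assumptions):
mgmt = graph.openManagement()
email = mgmt.getPropertyKey('user_email')
// composite index: backs the unique constraint and exact-match lookups
mgmt.buildIndex('byEmailComposite', Vertex.class).addKey(email).unique().buildCompositeIndex()
// mixed index: handled by the Elasticsearch backend, supports non-exact matching
mgmt.buildIndex('byEmailMixed', Vertex.class).addKey(email).buildMixedIndex('search')
mgmt.commit()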

How to create unique constraint in Elasticsearch database?

I am using Elasticsearch as a document database, and each record I create has a GUID that the system uses as the record id. Business people want to offer a feature to let the user have their own automatic file name convention based on the date and how many records were created so far that day/month.
What I need is to prevent duplicate user file names. Is there a way to set up an indexed field to be unique, like a SQL unique constraint?
You'd need to use the field that is supposed to be unique as the id for your documents. By default a new document with an existing id overrides the existing document with the same id, but you can switch to op_type=create in order to get back an error if a document with the same id already exists.
There's no way to have the same behaviour with arbitrary fields, though; only the _id field works that way. I would probably consider handling this logic in the application layer instead of within Elasticsearch.
One solution is to use the uniqueId field value as the document ID and use op_type=create while storing the documents in ES. With this you can make sure your uniqueId field has a unique value and will not be overridden by another document with the same value.
For this, the Elasticsearch documentation says:
The index operation also accepts an op_type that can be used to force a create operation, allowing for "put-if-absent" behavior. When create is used, the index operation will fail if a document by that id already exists in the index.
Here is an example of using the op_type parameter:
$ curl -XPUT 'http://localhost:9200/es_index/es_type/unique_a?op_type=create' -d '{
    "user" : "kimchy",
    "uniqueId" : "unique_a"
}'
Running the above request the first time succeeds, but running it again will give you an error because a document with that id already exists.
You can map the column you want the unique constraint on to the _id field.
Here is a sample river that uses PostgreSQL. You can change the database driver/DB URL according to your setup.
curl -XPUT localhost:9200/_river/simple_jdbc_river/_meta -d '{
    "type": "jdbc",
    "jdbc": {
        "strategy": "simple",
        "poll": "1s",
        "driver": "org.postgresql.Driver",
        "url": "jdbc:postgresql://DB-URL/DB-INSTANCE",
        "user": "USERNAME",
        "password": "PASSWORD",
        "sql": "select t.id as _id, t.name from topic as t",
        "digesting": true
    },
    "index": {
        "index": "jdbc",
        "type": "topic_jdbc_river1"
    }
}'
As of ES 7.5, there is no such extra "constraint" to ensure uniqueness via a custom field in the mapping.
But you can still work around it by using your own application UUID directly and explicitly as the _id (which is implicitly unique) to achieve your goal.
PUT <your_index_name>/_doc/<your_app_uuid>
{
    "a_field": "a_value"
}
Another approach is to build the string you store in the supposedly unique field around an auto-incrementing integer. This way you ensure from the start that your field values are unique.
You would put your file name together like this:
<current day/month>_<auto-incremented integer>
Auto-incrementing integers are not supported by Elasticsearch per se, but you can mimic them. If you happen to use Node.js, you can use the es-sequence module.
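If you are not on Node.js, one way to mimic a sequence inside Elasticsearch itself is a counter document bumped with a scripted upsert; a rough sketch (the counters index, the filename_seq id, and the n field are all made up here):
$ curl -XPOST 'http://localhost:9200/counters/_update/filename_seq?_source=true' -H 'Content-Type: application/json' -d '{
    "script": { "source": "ctx._source.n += 1" },
    "upsert": { "n": 1 }
}'
With _source=true the response echoes the updated counter under get._source, and you can embed that integer in the generated file name.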
