I am using this code:
sadd my_set "el1"
to insert "el1" into my_set. sadd only returns the number of inserted elements. What I need is the key of the inserted element so I can retrieve it later. I am sure there is a way that I am not aware of. Is sadd the right function, or should I choose something else like set/get?
EDIT: I need something like an auto_increment key in MySQL: when I insert something, I want to get the key of the last inserted element for further use.
I need something like this:
key: 1
value: {"name": "jack", "tel": "12412415"}
so I could look the record up later using key = 1.
To do something vaguely similar to "auto_increment", I would look at the INCR command:
http://redis.io/commands/incr
It will increment a value and return the new value to you - and it is atomic (like all single Redis commands), so you don't need to worry about threading issues. So your steps would be something like:
SET an increment key.
When you want to add a value, INCR the key, and SET your new value using the value INCR returned.
INCR has at this point increased the value of the increment key, so any repeated value insertions will use the "next" number.
If you want to store a list of items which can be looked up by index, you probably want to do something like this (in programming pseudocode):
// When you initialize your database for the first time.
SET index "0"
// When you want to insert a new item:
INCR index
SET myList:(index value) "My Value"
// When you want to retrieve an item, and you have the index for it:
GET myList:(index value)
In this example, I'm assuming that in your program you are keeping track of the values returned by INCR. The value INCR returns is going to be the index at which you insert the new item, as well as the index with which you'll look up your item later. So, in my example code, replace (index value) with the stored value you got back from INCR (how you do this depends on what programming language you're using, of course).
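For instance, from Node.js with the node-redis client (v4-style API; the index and myList key names are just the ones from the pseudocode above), a minimal sketch might look like this:

const { createClient } = require('redis');

async function addItem(value) {
    const client = createClient();
    await client.connect();

    // INCR is atomic, so concurrent inserts each receive a distinct index.
    const id = await client.incr('index');

    // Store the value under a derived key: myList:1, myList:2, ...
    await client.set('myList:' + id, JSON.stringify(value));

    await client.quit();
    return id; // keep this to GET myList:<id> later
}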
Note that this DOES allow deletion of items in the middle, via DEL myList:(index value), because you're tracking the last index with index, so even if an item is deleted, the last index will still remain the same - this behaves very similarly to "auto increment" fields in most SQL servers.
You really don't want to use sets for this; sets are inherently unordered, and they are not really made to look things up by "key" - items in a set don't even really have a key. Sets are more useful for the other set operations you can perform on them, like SINTER or SDIFF.
Related
I'm trying to write code that does a full table scan in Go using the bigtable.Table.SampleRowKeys RPC method. The table has around 7M rows (verified with cbt), yet the call returns a single key, whereas the documentation mentions:
// SampleRowKeys returns a sample of row keys in the table. The returned row keys will delimit contiguous sections of
// the table of approximately equal size, which can be used to break up the data for distributed tasks like MapReduce.
Am I missing something?
It turns out that the returned keys are split points (midpoints), so if the call returns a single key, say [k1], then the ranges to scan are [("", k1), (k1, "")].
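In other words, the ranges you actually need to scan are the gaps between the sampled keys, plus the open-ended ranges at both ends. A small sketch of that construction (plain logic, shown here in JavaScript; the same idea carries over to Go, with an empty string meaning an unbounded endpoint):

// Turn sampled split points into scan ranges.
// e.g. ['k1'] -> [['', 'k1'], ['k1', '']]
function rangesFromSampleKeys(sampleKeys) {
    const bounds = [''].concat(sampleKeys, ['']);
    const ranges = [];
    for (let i = 0; i < bounds.length - 1; i++) {
        ranges.push([bounds[i], bounds[i + 1]]);
    }
    return ranges;
}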
What's a good strategy for generating auto-incrementing keys in LevelDB? My goal is to be able to iterate over the keys in the order that they were inserted.
Two methods:
1) Use the default comparator, but convert the index key before storing it, e.g. turn '1' into '000000001' and '20' into '000000020'; with fixed-width, zero-padded keys, LevelDB's lexicographic ordering matches insertion order.
2) Define your own comparator, which converts the key from string to integer so you can compare the integers directly.
With either of the two methods, you need to store a key-value pair in LevelDB, current_id ----> integer, or you can store the current id in a separate file, e.g. using mmap.
Then, in your own Add() function, read the current id from the current_id key, insert the new key-value pair id ----> value, and update current_id to the current id plus one (see the sketch below).
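A minimal sketch of that Add() function with the Node.js level package, assuming method 1 (fixed-width, zero-padded string keys), a single writer, and a counter stored under the current_id key:

const { Level } = require('level');

const db = new Level('./mydb', { valueEncoding: 'json' });
const WIDTH = 9; // assumed key width; must be wide enough for every id you will ever use

async function add(value) {
    // Read the current counter, defaulting to 0 on the very first insert.
    let current = 0;
    try {
        current = (await db.get('current_id')) ?? 0;
    } catch (err) {
        // key not found yet
    }

    const id = current + 1;
    const key = String(id).padStart(WIDTH, '0'); // 1 -> '000000001'

    // Write the new entry and the updated counter in one atomic batch.
    await db.batch([
        { type: 'put', key: key, value: value },
        { type: 'put', key: 'current_id', value: id }
    ]);

    return key;
}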
Since a LevelDB instance can only be accessed from one application at a time, you might as well use a 64-bit long and increment it in the application. When opening the DB (and before allowing any writes), to find the last inserted key you can use the SeekToLast() method of the Iterator.
As I just pointed out in a question on integer keys, if you want to use binary integers you need to create a custom Comparator for the database, otherwise you don't get them in ascending binary order. It's not hard but you may have overlooked the need.
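With the Node.js level package, the SeekToLast() equivalent is a reverse iterator limited to one entry; a small sketch, again assuming fixed-width, zero-padded string keys so that lexicographic order matches numeric order:

const { Level } = require('level');

const db = new Level('./mydb');

// Run once at startup, before any writes; afterwards keep the counter in memory.
async function lastInsertedId() {
    for await (const [key] of db.iterator({ reverse: true, limit: 1 })) {
        return parseInt(key, 10); // e.g. '000000042' -> 42
    }
    return 0; // empty database
}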
I'm not quite sure what you're asking. If the only data you are adding is keys that simply record entries, like a log, then yes, just use an integer key.
However, if you are inserting keys you are going to search on for some other reason, PLUS you want to later iterate them in insertion order, it gets a bit more complex.
Basically you want to insert two keys for each key value, using a prefix to determine whether keys are "value keys" or "ordering keys". e.g., say you have Frank, John, Sally and Amy as keys and use prefix ~N for Name keys and ~I for Iterator keys.
The database looks like the following, note that the "Iterator keys" don't have a value associated with them as we can just get the names out of the key. I've shown it as if you used a string of two digits for the number, rather than using an integer value and needing a special Comparator.
~I00Frank
~I01John
~I02Sally
~I03Amy
~NAmy => Amy's details
~NFrank => Frank's details
~NJohn => John's details
~NSally => Sally's details
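A sketch of that dual-key scheme with the Node.js level package; the ~N/~I prefixes and the two-digit counter are taken from the example above, and the in-memory counter is assumed to be recovered at startup (e.g. from the last ~I key):

const { Level } = require('level');

const db = new Level('./people', { valueEncoding: 'json' });
let nextIndex = 0; // recover at startup, e.g. from the last ~I key

async function addPerson(name, details) {
    const seq = String(nextIndex++).padStart(2, '0'); // '00', '01', ...
    await db.batch([
        // Ordering key: encodes insertion order; no value needed.
        { type: 'put', key: '~I' + seq + name, value: '' },
        // Value key: lookup by name.
        { type: 'put', key: '~N' + name, value: details }
    ]);
}

// Iterate names in insertion order by scanning the ~I prefix.
async function namesInInsertionOrder() {
    const names = [];
    for await (const [key] of db.iterator({ gte: '~I', lt: '~J' })) {
        names.push(key.slice(4)); // drop '~I' plus the two-digit counter
    }
    return names;
}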
I have a map function in a view in CouchDB that emits non-unique, two-element array keys (e.g. [66, true]) for documents of type message.
The first position in the array key is a user_id, the second position represents whether or not the user has read the message.
This works nicely in that I can set include_docs=true and retrieve the actual documents. However, I'm retrieving duplicate documents in that case, as you can see above in the view results. I need to be able to write a view that can be queried to return unique messages that have been read by a given user. Additionally, I need to be able to efficiently paginate the resultset.
Notice in the image above that [66, true] is emitted twice for doc id 26a9a271de3aac494d37b17334aaf7f3. As far as I can tell, with the keys in my map function, I cannot reduce in such a way that unique documents will be returned.
The next idea I had was to also emit doc._id in the map function and reduce with group_level=exact, the result being:
Now I am able to get unique document ids, but I cannot get the documents without doing a second query. And even in the case of a second query, it would require a lot of complexity to do pagination like this (at least I think so).
The last idea I came up with is to emit the entire document, rather than the doc._id, in the third position of the array key; then I can access the entire document and likely paginate. This seems really brutish.
So my question is:
Is #3 above a terrible idea? Is there something I'm missing? Is there a better approach?
Thanks in advance.
See @WickedGrey's comment on the question. The solution is to ensure that I never emit the same key twice for one document. I do this in the map function by keeping track of the keys as I emit them in an array, then skipping the emit if the key already exists in the array, as sketched below.
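A sketch of that map function; the document fields (doc.recipients, user_id, read) are placeholders for whatever your message documents actually look like:

function (doc) {
    if (doc.type === 'message') {
        var seen = [];
        (doc.recipients || []).forEach(function (r) {
            var key = [r.user_id, r.read];
            var fingerprint = JSON.stringify(key);
            // Emit each [user_id, read] key at most once per document.
            if (seen.indexOf(fingerprint) === -1) {
                seen.push(fingerprint);
                emit(key, null);
            }
        });
    }
}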
I'm using Windows Azure Table Storage to store millions of entities; however, I'm trying to figure out the best solution that easily allows for three things:
1) a search on an entity will retrieve that entity and at least (pageSize) entities either side of it
2) if there are more than (pageSize) entities either side of that entity, then "page next" or "page previous" links are shown; this continues until either the start or the end is reached
3) the order is reverse chronological
I've decided that the PartitionKey will be the Title provided by the user, as each container is unique in the system. The RowKey is Steve Marx's lexicographical algorithm:
http://blog.smarx.com/posts/using-numbers-as-keys-in-windows-azure
which, when converted from C# to JavaScript, looks like this:
pad(new Date(100000000 * 86400000).getTime() - new Date().getTime(), 19) + "_" + uuid()
uuid() is a JavaScript function that returns a GUID, and pad() left-pads with zeros up to 19 characters in length. So records in the system look something like this:
PK RK
TEST 0008638662595845431_ecf134e4-b10d-47e8-91f2-4de9c4d64388
TEST 0008638662595845432_ae7bb505-8594-43bc-80b7-6bd34bb9541b
TEST 0008638662595845433_d527d215-03a5-4e46-8a54-10027b8e23f8
TEST 0008638662595845434_a2ebc3f4-67fe-43e2-becd-eaa41a4132e2
This pattern allows for every new entity inserted to be at the top of the list which satisfies point number 3 above.
With a nice way of adding new records in place, I thought I would then create a mechanism that looks at the first half of the RowKey, i.e. the 0008638662595845431_ part, and does a greater-than or less-than comparison depending on the paging direction relative to the already found item. In other words, to get the row immediately before 0008638662595845431 I would do a query like so:
var tableService = azure.createTableService();
var minPossibleDateTimeNumber = pad(new Date(-100000000*86400000).getTime() - new Date().getTime(), 19);

tableService.getTable('testTable', function (error) {
    if (error === null) {
        var query = azure.TableQuery
            .select()
            .from('testTable')
            .where('PartitionKey eq ?', 'TEST')
            .and('RowKey gt ?', minPossibleDateTimeNumber + '_')
            .and('RowKey lt ?', '0008638662595845431_')
            .and('Deleted eq ?', 'false');

        // the query would then be run, e.g. with tableService.queryEntities(query, callback)
    }
});
If more than 1000 results are returned and Azure gives me a continuation token, then I thought I would remember the last item's RowKey, i.e. the number part 0008638662595845431. The next query would then use the remembered value as its starting value, and so on.
I am using the Windows Azure Node.js SDK and the language is JavaScript.
Can anybody see gotchas or problems with this approach?
I do not see how this can work effectively and efficiently, especially to get the rows for a previous page.
To be efficient, the prefix of your "key" needs to be a serially incrementing or decrementing value, instead of being based on a timestamp. A timestamp-generated value would have duplicates as well as holes, making the mapping from page size to row count inefficient at best and difficult to determine at worst.
Also, this potential algorithm is dependent on a single partition key, destroying table scalability.
The challenge here would be to have a method of generating a serially incremented key. One solution is to use a SQL database and perform an atomic update on a single row, such that an incrementing or decrementing value is produced in sequence. Something like UPDATE … SET X = X + 1 and return X, perhaps via a stored procedure.
So the key could be a zero-left-padded, serially generated number, split so that, say, the first N digits of the number form the partition key and the remaining M digits form the row key.
For example
PKey RKey
00001 10321
00001 10322
….
00954 98912
Now, since the rows are in sequence it is possible to write a query with the exact key range for the page size.
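For example, a small helper that turns the serially generated number into the two keys; the digit counts (N = 5, M = 5) are just illustrative, matching the table above:

// e.g. makeKeys(110321) -> { partitionKey: '00001', rowKey: '10321' }
function makeKeys(serial) {
    var s = String(serial);
    while (s.length < 10) {   // zero-left-pad to the fixed overall width (N + M)
        s = '0' + s;
    }
    return {
        partitionKey: s.slice(0, 5), // first N digits
        rowKey: s.slice(5)           // remaining M digits
    };
}

A page of pageSize rows is then the exact key range from makeKeys(start) up to, but not including, makeKeys(start + pageSize); if that range crosses a partition boundary it simply becomes one query per partition.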
Caveat: there is a small risk of a failure occurring between generating a serial key and writing to table storage, in which case there may be holes in the table. However, your paging algorithm should be able to detect and work around such instances quite easily, by specifying a page size slightly larger than necessary or by retrying with an adjusted range.
Could someone please explain, what effect the following process has on tombstones:
1.) Creating a "Row" with Key "1" ("Fields": user, password, date)
2.) Deleting the "Row" with Key "1"
3.) Creating a "Row" with Key "1" ("Fields": user, password, logincount)
The sequence is executed in one thread sequentially (so this happens with a relatively high "speed" = no long pauses between the actions).
My Questions:
1.) What effect does this have on the creation of a tombstone? After step 2.) a tombstone is created/exists. But what happens to the existing tombstone if the new (slightly changed) row is created again under the same key in step 3.)? Can Cassandra "reanimate" the tombstones efficiently?
2.) How much worse is the process described above compared to deleting only the "date" field in a targeted way and then creating the "logincount" field instead? (The latter will most likely be more performant, but on the other hand it is much more complex to find out which fields have been deleted than to simply delete the whole row and recreate it from scratch with the correct data...)
Remark/Update:
What I actually want to do is set the "date" field to null, but this does not work in Cassandra: nulls are not allowed as values. So if I want to set it to null, I have to delete it. But I am afraid that this explicit second delete request will have a negative performance impact (compared to just setting the value to null)... And, as described, I first have to find out which fields are nullified and previously had a value (I have to compare all attributes for this...)
Thank you very much!
Markus
I would like to belatedly clarify some things here.
First, with respect to Theodore's answer:
1) All rows have a tombstone field internally for simplicity, so when the new row is merged with the tombstone, it just becomes "row with new data, that also remembers that it was once deleted at time X." So there is no real penalty in that respect.
2) It is incorrect to say that "If you create and delete a column value rapidly enough that no flush takes place in the middle... the tombstone [is] simply discarded"; tombstones are always persisted, for correctness. Perhaps the situation Theodore was thinking was the other way around: if you delete, then insert a new column value, then the new column replaces the tombstone (just as it would any obsolete value). This is different from the row case since the Column is the "atom" of storage.
3) Given (2), the delete-row-and-insert-new-one is likely to be more performant if there are many columns to be deleted over time. But for a single column the difference is negligible.
Finally, regarding Tyler's answer, in my opinion it is more idiomatic to simply delete the column in question than to change its value to an empty [byte]string.
1). If you delete the whole row, then the tombstone is still kept, and it is not reanimated by the subsequent insertion in step 3. This is because there may have been an insertion for the row a long time ago (e.g. step 0: key "1", field "name"). The "name" field of row "1" needs to stay deleted, while the "user" field of row "1" is reanimated.
2). If you create and delete a column value rapidly enough that no flush takes place in the middle, there is no performance impact. The column will be updated in-place in the Memtable, and the tombstone simply discarded. Only a single value will end up being written persistently to an SSTable.
However, if the Memtable is flushed to disk between steps 2 and 3, then the tombstone will be written to the resulting SSTable. A subsequent flush will write the new value to the next SSTable. This will make subsequent reads slower, since the column now needs to be read from both SSTables and reconciled. (Similarly if a flush occurs between steps 1 and 2.)
Just set the "date" column to hold an empty string. That's what's typically used instead of null.
If you want to delete the column, just delete the column explicitly instead of deleting the entire row. The performance effect of this is similar to writing an empty string for the column value.
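In today's CQL terms, a sketch of the two options with the Node.js cassandra-driver; the users table, its id primary key and the keyspace are assumptions standing in for the question's schema, and while the original thread predates CQL, the same distinction (cell tombstone vs. row tombstone) applies:

const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
    contactPoints: ['127.0.0.1'],
    localDataCenter: 'datacenter1', // adjust to your cluster
    keyspace: 'demo'                // assumed keyspace containing the users table
});

async function clearDate(id) {
    // Deletes only the "date" column: a single cell tombstone.
    await client.execute('DELETE date FROM users WHERE id = ?', [id], { prepare: true });
}

async function deleteUser(id) {
    // Deletes the whole row: a row-level tombstone.
    await client.execute('DELETE FROM users WHERE id = ?', [id], { prepare: true });
}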