G-WAN KV store: KV_INCR_KEY

How do I use KV_INCR_KEY?
I found this useful feature in the G-WAN API, but there is no sample code for it.
I want to add items to the KV store with this as the primary key.
Also, how do I get the value of this key?

The KV_INCR_KEY value is a flag intended to be passed to kv_add().
You get the newly inserted key's value by checking the return value of kv_add(). The documentation states:
kv_add(): add/update a value associated to a key
return: 0:out of memory, else:pointer on existing/inserted kv_item struct
This was derived from an idea discussed on the G-WAN forum. And, like some other flags (timestamp or persistence, for example), it has not been implemented yet (KV_NO_UPDATE is functional).
Since what follows the next version (focused on new scripting languages) is a kind of zero-configuration MapReduce, the KV store will get more attention soon.
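For reference, here is a hypothetical sketch of what usage might look like once the flag is functional. Only the kv_add() return-value contract comes from the documentation quoted above; the kv_init() arguments and kv_item field names are assumptions about gwan.h and may differ between G-WAN releases.

#include <stdio.h>
#include "gwan.h"

// hypothetical usage -- KV_INCR_KEY is not implemented yet, and the
// kv_init() signature / kv_item fields here are assumptions, not gospel
kv_t store;
kv_init(&store, "items", 1024, 0, 0, 0); // store, name, max items, flags, delete/recycle procs

kv_item item = {0};
item.val   = "some payload";
item.flags = KV_INCR_KEY;   // ask the store to auto-generate the primary key

// per the docs: 0 on out-of-memory, else a pointer to the inserted kv_item;
// the generated key would then be read back from the returned struct
kv_item *ins = kv_add(&store, &item);
if(ins)
   printf("auto-generated key: %s\n", ins->key);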

Related

What is the purpose of RocksDBStore with Serdes.Bytes() and Serdes.ByteArray()?

RocksDBStore<K,V> stores keys and values as byte[] on disk. It converts to and from the typed K and V objects using the Serdes provided when the RocksDBStore<K,V> object is constructed.
Given this, please help me understand the purpose of the following code in RocksDbKeyValueBytesStoreSupplier:
return new RocksDBStore<>(name,
                          Serdes.Bytes(),
                          Serdes.ByteArray());
Providing Serdes.Bytes() and Serdes.ByteArray() looks redundant.
RocksDbKeyValueBytesStoreSupplier was introduced in KAFKA-5650 (Kafka Streams 1.0.0) as part of KIP-182: Reduce Streams DSL overloads and allow easier use of custom storage engines.
KIP-182 contains the following sentence:
The new Interface BytesStoreSupplier supersedes the existing StateStoreSupplier (which will remain untouched). This so we can provide a convenient way for users creating custom state stores to wrap them with caching/logging etc if they chose. In order to do this we need to force the inner most store, i.e, the custom store, to be a store of type <Bytes, byte[]>.
Please help me understand why we need to force custom stores to be of type <Bytes, byte[]>?
Another place (KAFKA-5749) where I found a similar sentence:
In order to support bytes store we need to create a MeteredSessionStore and ChangeloggingSessionStore. We then need to refactor the current SessionStore implementations to use this. All inner stores should by of type < Bytes, byte[] >
Why?
Your observation is correct: the PR implementing KIP-182 missed removing the Serdes from RocksDBStore, which are no longer required. This was already fixed in the 1.1 release.
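To see the layering that KIP-182 describes, here is a short sketch against the public API it introduced (the store name "counts" and the String/Long serdes are just example choices): the supplier hands out the inner <Bytes, byte[]> store, and the builder wraps it with the layers that apply the user-facing serdes.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.KeyValueBytesStoreSupplier;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

// inner store: RocksDB, always typed <Bytes, byte[]>
KeyValueBytesStoreSupplier supplier = Stores.persistentKeyValueStore("counts");

// the outer layers (metering, caching, changelogging) work on raw bytes,
// so they can wrap any custom store uniformly; the typed serdes are
// applied on top, not inside the store itself
StoreBuilder<KeyValueStore<String, Long>> builder =
    Stores.keyValueStoreBuilder(supplier, Serdes.String(), Serdes.Long())
          .withCachingEnabled();

This is why inner stores are forced to <Bytes, byte[]>: the wrapping layers can be written once against raw bytes instead of once per key/value type.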

CoreAudio: What is "AudioBox" as contrasted to "AudioDevice"

The header file CoreAudio/AudioHardware.h refers to a class "AudioBox" and indicates that it is distinct from but related to the class "AudioDevice". Searching developer.apple.com yields no hits for AudioBox. There is, unfortunately, a commercial product called AudioBox™, which makes googling for the term painfully low-yield.
Here are the comments containing the references:
kAudioHardwarePropertyBoxList
    An array of AudioObjectIDs that represent all the AudioBox
    objects currently provided by the system.
kAudioHardwarePropertyTranslateUIDToBox
    This property fetches the AudioObjectID that corresponds to the
    AudioBox that has the given UID. The UID is passed in via the
    qualifier as a CFString while the AudioObjectID for the AudioBox
    is returned to the caller as the property's data. Note that an
    error is not returned if the UID doesn't refer to any AudioBoxes.
    Rather, this property will return kAudioObjectUnknown as the
    value of the property.
The header file AudioHardwareBase.h contains numerous references to AudioBox but does not define or explain it, although it associates it with AudioDevice.
Searching the docs via Xcode just takes me back to AudioHardwareBase.h.
I can infer that perhaps an "AudioBox" is an audio device that is accessed via a plugin. But this does not appear to be stated anywhere.
So What Is An AudioBox?
An AudioBox is a container of (usually) AudioDevice objects.
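As a quick way to see the relationship, you can enumerate the system's AudioBox objects with the standard HAL property calls, using the kAudioHardwarePropertyBoxList selector quoted in the question (a minimal sketch; error handling omitted):

#include <CoreAudio/CoreAudio.h>
#include <stdio.h>

int main(void)
{
    // ask the system object for its list of AudioBox objects
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyBoxList,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };

    UInt32 size = 0;
    AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &addr, 0, NULL, &size);

    UInt32 count = size / sizeof(AudioObjectID);
    if (count == 0) { printf("no AudioBox objects found\n"); return 0; }

    AudioObjectID boxes[count];
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, boxes);
    printf("found %u AudioBox objects\n", (unsigned)count);
    return 0;
}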

$ORDER vs counting to scan global range

I have a choice between two ways of scanning through a key level in a large global array and am trying to figure out if one method is more efficient than the other.
This is a vendor-supplied application and database on the InterSystems Caché database platform. It is written in the old MUMPS style and does not use any of Caché's object persistence features: all data is stored in globals directly, and any indexes are application-maintained.
There is a common convention for repeating data elements attached to entities where the first record will contain a count of child records and then each child record is numbered sequentially at the next key level. For example:
^GBDATA(12345,100)="3"
^GBDATA(12345,100,1)="A^Record"
^GBDATA(12345,100,2)="B^Record"
^GBDATA(12345,100,3)="C^Record"
Where "12345" is the entity key, and "100" is one of the attached detail types. Note that the first "100" record with no other keys has the count of subrecords. There could be anywhere between 0 and hundreds of subrecords attached. The entities are often very wide and there is a lot of other data besides this subrecord type (not shown in example).
Given an entity key, I want to scan through all the subrecords of one type. Would it be faster to use $ORDER to go through the subkeys or to use a FOR loop to anticipate the key values? Does it matter?
$ORDER method:
SET EKEY=12345
SET SEQ=""
FOR
{
    SET SEQ=$ORDER(^GBDATA(EKEY,100,SEQ),1,ROWDATA)
    QUIT:SEQ=""
    WRITE ROWDATA,!
}
FOR count method:
SET EKEY=12345
SET LIM=^GBDATA(EKEY,100)
FOR SEQ=1:1:LIM
{
    WRITE ^GBDATA(EKEY,100,SEQ),!
}
Does anyone know how $ORDER vs $GET is implemented internally in Caché?
I'm having trouble testing this empirically since we only have one production instance with appropriate data and I can't take it offline to clear the cache. I'm most interested in from-disk performance as opposed to from-cache performance.
You could use %SYS.MONLBL to figure out definitively. My guess is that $ORDER is slightly better.
http://docs.intersystems.com/cache20122/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_monlbl
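For example, the line-by-line monitor is started from a terminal prompt in the namespace you want to profile; it then interactively walks you through selecting the routines and metrics to collect (this assumes you have the privileges the utility requires):

DO ^%SYS.MONLBL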
Regarding your question, "Does anyone know how $ORDER vs $GET is implemented internally in Caché?": the two are completely different functions.
$ORDER walks the subscripts of a ^Global in a given direction, returning the next subscript at that level.
$GET pulls the data stored within the ^Global. Below is an example of its use. I use Caché ObjectScript; however, this should give you a general idea.
Global Structure
^People(LastName,FirstName)="Phone"
Global Data
^People("Doe","John")="1035001234"
^People("Smith","Jane")="7405241305"
^People("Wood","Edgar")="7555127598"
Code Sample
SET LASTNAME=""
FOR
{
    SET LASTNAME=$ORDER(^People(LASTNAME))
    QUIT:LASTNAME=""
    SET FIRSTNAME=""
    FOR
    {
        SET FIRSTNAME=$ORDER(^People(LASTNAME,FIRSTNAME))
        QUIT:FIRSTNAME=""
        SET PHONE=$GET(^People(LASTNAME,FIRSTNAME))
    }
}
The sample above uses $ORDER to walk the last names stored in ^People and, for each last name, the first names beneath it. It then uses $GET to fetch the data held at the ^People(LASTNAME,FIRSTNAME) node, which is the phone number.
For some samples and reference areas, check out the following links:
$Get Information
$Order Information

Identify ABRecord records uniquely: Is [ABRecord uniqueId] immutable?

I need to reference ABPerson records from within an application. I use the unique ID provided by
- (NSString *)uniqueId
and attach it to my in-app contact record.
Additionally, I save the ABPerson's vCardRepresentation as a fallback. In case the app is no longer able to locate the ABRecord using the uniqueId, it asks the user to recover the address book record using the saved vCardRepresentation. All works fine.
Unfortunately, a friend told me that uniqueId isn't immutable: during a sync, uniqueId may suddenly change.
According to him, somewhere in the iOS documentation Apple explains that there is no way to immutably identify ABPersons using uniqueId. In OS X's Cocoa documentation, I failed to find such a hint.
On a given Mac, may the uniqueId change suddenly? If that's true, what's the correct way to identify ABPerson records from within an external application?
In case the uniqueID isn't immutable, I certainly may assign a custom property with a GUID. Unfortunately, custom fields do not sync.
Certainly, I'd prefer to use uniqueId.
For what it's worth, from Apple's tech doc:
kABUIDProperty
The unique ID for this record. It’s guaranteed never to change, no matter how much the record changes. If you need to store a reference to a record, use this value. Type: kABStringProperty.
Available in Mac OS X v10.2 and later.
Declared in ABGlobals.h.
It looks like the kABUIDProperty approach might not work anymore. I came across this blog entry with more discussion in the comments at: http://blog.clickablebliss.com/2011/11/07/addressbook-record-identifiers-on-mac-and-ios/.
A case in point: if a user decides to turn on iCloud sync, the unique IDs in that user's address book will change. If the user turns off iCloud sync, they'll change again.
Addendum: it might be worthwhile looking at the StackOverflow entry here.
Apple's docs do say this (quoted from the link):
"The recommended way to keep a long-term reference to a particular record is to store the
first and last name, or a hash of the first and last name, in addition to the identifier.
When you look up a record by ID, compare the record’s name to your stored name. If they don’t match, use the stored name to find the record, and store the new ID for the record."
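Translated into code, Apple's recommendation looks roughly like this (a sketch against the Mac AddressBook framework; the function name and the exact matching policy are this example's own choices, and error handling is omitted):

#import <AddressBook/AddressBook.h>

// Resolve a stored (uniqueId, first, last) reference to a person, per
// Apple's advice: trust the id only if the name still matches, otherwise
// fall back to a name search and let the caller persist the new uniqueId.
ABPerson *PersonForStoredReference(NSString *storedId,
                                   NSString *storedFirst,
                                   NSString *storedLast)
{
    ABAddressBook *ab = [ABAddressBook sharedAddressBook];
    ABPerson *person = (ABPerson *)[ab recordForUniqueId:storedId];

    if (person &&
        [[person valueForProperty:kABFirstNameProperty] isEqual:storedFirst] &&
        [[person valueForProperty:kABLastNameProperty] isEqual:storedLast])
        return person;

    // the id is stale (e.g. after an iCloud sync): search by name instead
    ABSearchElement *byLast =
        [ABPerson searchElementForProperty:kABLastNameProperty
                                     label:nil
                                       key:nil
                                     value:storedLast
                                comparison:kABEqual];
    for (ABPerson *match in [ab recordsMatchingSearchElement:byLast]) {
        if ([[match valueForProperty:kABFirstNameProperty] isEqual:storedFirst])
            return match;  // caller should store [match uniqueId] as the new id
    }
    return nil;
}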

Memcached dependent items

I'm using memcached (specifically the Enyim memcached client) and I would like to be able to make keys in the cache dependent on other keys, i.e. if Key A is dependent on Key B, then whenever Key B is deleted or changed, Key A is also invalidated.
If possible I would also like to make sure that data integrity is maintained if a node in the cluster fails, i.e. if Key B is at some point unavailable, Key A should still be treated as invalid.
Based on this post I believe that this is possible, but I'm struggling to understand the algorithm enough to convince myself how / why this works.
Can anyone help me out?
I've been using memcached quite a bit lately, and I'm sure that what you're trying to do with dependencies isn't possible with memcached "as is"; it would need to be handled on the client side. Data replication, likewise, should happen server side, not from the client; these are two different domains. (With memcached at least, given its lack of data-storage logic. The point of memcached is just that: extreme minimalism for better performance.)
For data replication (protection against a physically failing cluster node) you should check out Membase instead: http://www.couchbase.org/get/couchbase/current
For the dependency algorithm, I could see something like this in a client: for any given key there is a companion key holding the list/array of dependent keys. Here is a sketch, written in C# against the Enyim client since that is what you are using (the exact Enyim method signatures are worth double-checking against your version):
using System.Collections.Generic;
using Enyim.Caching;

class DependentKeys
{
    readonly MemcachedClient client = new MemcachedClient();

    // return the list of key names that depend on keyname,
    // or an empty list if no dependents are recorded
    List<string> GetDeps(string keyname)
    {
        return client.Get(keyname + "_deps") as List<string> ?? new List<string>();
    }

    // delete a key and, recursively, every key that depends on it
    public void DeleteKey(string keyname)
    {
        foreach (string dep in GetDeps(keyname))
            DeleteKey(dep);               // the recursion deletes dep itself
        client.Remove(keyname + "_deps"); // drop the dependency list as well
        client.Remove(keyname);
    }
}

// Example: key "demokey1" has a counterpart "demokey1_deps" whose value is a
// list containing "demokey2" and "demokey3". DeleteKey("demokey1") first gets
// "demokey1_deps", runs DeleteKey() on each key listed there, then deletes
// "demokey1" itself.
I don't think it's a direct solution, but try creating a system of namespaces in your memcached keys, e.g. http://www.cakemail.com/namespacing-in-memcached/. In short, keys are generated so that they contain the current values of other memcached keys. In the namespacing problem, the idea is to invalidate a whole range of keys within a certain namespace. This is achieved by incrementing the value of the namespace key; any keys referencing the previous namespace value will no longer match when the key is regenerated.
Your problem looks a little different, but I think that by setting up Key A to be in the Key B "namespace", if node B is unavailable then the lookup needed to compute Key A's full namespaced key, e.g.
"Key A|Key B:<whatever Key B's value is>"
will fail, thus allowing you to determine that B is unavailable and to invalidate the cache lookup for Key A.
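A sketch of that scheme, again against the Enyim client (the "_ns" suffix and helper names are this example's own convention, and the Increment overload should be checked against your Enyim version):

using Enyim.Caching;

class NamespacedCache
{
    readonly MemcachedClient client = new MemcachedClient();

    // build Key A's full cache key from Key B's current namespace counter;
    // returns null when B's counter is unavailable, so A must be treated
    // as invalid rather than served stale
    public string NamespacedKey(string keyA, string keyB)
    {
        object ns = client.Get(keyB + "_ns");
        return ns == null ? null : keyA + "|" + keyB + ":" + ns;
    }

    // bumping the counter orphans every key that embeds the old value,
    // which invalidates the whole "namespace" in one operation
    public void InvalidateNamespace(string keyB)
    {
        client.Increment(keyB + "_ns", 1, 1);
    }
}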
