I added a linked fact using:
context.InsertLinked(longOrderKey, longOrder);
At some point later, I want to remove this fact. It's easy for me to construct the key without having the record:
var longOrderKey = (managedAccount.AccountId, PositionType.Long, fungible.FungibleId);
So why do I need the record when removing a linked fact using the method:
context.RetractLinked(longOrderKey, longOrder);
Why can't this method just use the longOrderKey? What if I don't have the longOrder record? Do I really need to look it up before I can remove it?
Linked facts are tied to the activation that created them. The purpose of the key is to identify the specific fact when the activation produced more than one linked fact. If you are inserting just one linked fact in the RHS of a rule, you can set the key to anything, e.g. "1"; if you were to insert two facts, you could set the keys to "1" and "2", and so on. In essence, the key identifies the linked fact within the activation. The fact itself is needed so that the engine can find the corresponding entries in the working memory, much like ISession.Retract requires the fact object so that it can find it in the working memory.
Another point is that in most scenarios you should not need to retract linked facts at all: they get retracted automatically once the activation is deleted (i.e. the conditions that created the activation become false).
Related
The specification for FHIR documents seems to mandate that all bundle entries in the document resource be part of the reference graph rooted at the Composition entry. That is, they should be the source or the target of a reference relation that traces all the way up to the root entry.
Unfortunately I have not been able to locate all the relevant passages in the FHIR specification; one place where it is spelled out is in 3.3.1 Document Content, but it is not really clear whether this pertains to all bundles of type 'document' (i.e. even those that happen to be bundles with type code 'document' but are merely collections of machine-processable data without any aspirations to represent a FHIRy document).
The problem with the referencedness requirement lies in the fact that the HAPI validator employs linear search for checking the references. So, if we have to ship N bundle entries full of data to a payor, we have to include a list with N references (one for each data-bearing bundle entry). That leads to N reference searches with O(N) effort during validation, which makes the reference checking complexity effectively quadratic in the number of entries.
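To make the complexity argument concrete, here is a toy Ruby sketch (not HAPI's actual implementation; entry shape and counts are invented) contrasting linear reference resolution, which costs O(N) per reference and O(N²) overall, with a one-time hash index that makes each lookup O(1):

```ruby
# Toy model: N entries, each referenced once from the entry list.
entries = (1..1000).map { |i| { full_url: "urn:uuid:#{i}", data: "..." } }
references = entries.map { |e| e[:full_url] }

# Linear search: each of the N references scans all N entries => O(N^2).
linear = references.count { |ref| entries.any? { |e| e[:full_url] == ref } }

# Indexed: build the index once (O(N)), then each lookup is O(1).
index = entries.to_h { |e| [e[:full_url], e] }
indexed = references.count { |ref| index.key?(ref) }

# Both resolve all 1000 references; only the cost differs.
```

At 1,000 entries the linear variant already performs a million comparisons, which is why the effect explodes at the 25,000-entry file sizes described below.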
This easily brings even the most powerful computers to their knees. Current size constraints effectively cap the number of entries per file at roughly 25,000, and the HAPI validator needs several hours to chew through that, even on the most powerful CPUs currently available. Without the references, validation would take less than a minute for the same file.
In our use case, data-bearing entries have no identity outside of the containing bundle file. Practically speaking, they would need neither entry.fullUrl nor entry.resource.id, because their business identifiers are contained in included base64 blobs. However, the presence or absence of these identifiers has no practical influence on the time needed for validation (fractions of a second even for a 1 GB file), so who cares. It's the list of references that kills the HAPI validator.
Perhaps it would be possible to fulfil the letter of the referencedness requirement by making all entries include a reference to the Composition. The HAPI validator doesn't care either way, so I don't know whether that would be valid or not. But even if it were FHIRly valid, it would be a monstrously silly workaround.
Is there a way to ditch the referencedness requirement? Perhaps by changing the bundle type to something like 'collection', or by using contained resources?
P.S.: for the moment we are using a workaround that cuts validation time from hours to less than a minute, but it's a hack, and we currently don't have the resources to fix the HAPI validator. What I'm mostly concerned about is how the specifications (profiles) need to be changed in order to avoid the problem I described.
(i.e. even those that happen to be bundles with type code 'document' but are merely collections of machine-processable data without any aspirations to represent a FHIRy document)
If it is not a document, and not intended to be one, do not use the 'document' Bundle type. If you do, you would be misrepresenting the data, which is what FHIR tries to avoid.
It seems like you want to send a collection of resources that are not necessarily related, so
Is there a way to ditch the referencedness requirement? Perhaps by changing the bundle type to something like 'collection'
Yes, I would use 'collection', or maybe a 'batch/transaction' depending on what I want to tell the receiver to do with the data.
The documents page says:
The document bundle SHALL include only:
The Composition resource, and any resources directly or indirectly (e.g. recursively) referenced from it
A Binary resource containing a stylesheet (as described below)
Provenance Resources that have a target of Composition or another resource included in the document
A document is an attested, human-readable, frozen set of content. If that's not what you need, then use a different Bundle type. However, even if you do need the 'document' type, that doesn't mean that systems should necessarily validate all requirements at runtime.
I keep seeing code like this:
.WebButton("locator1", "locator2", "locator3")
What is the type of the arguments to WebButton, WebElement, WebEdit, etc.? I tried passing an array to .WebButton, but QTP told me it's not the correct type. Is there an alternate way to pass multiple locators?
Also, what is the return type of .WebButton, .WebElement etc. ?
The "arguments" you are talking about are the set of properties that QTP/UFT needs in order to identify that particular object (WebElement, WebEdit, etc.) uniquely, so that one can perform actions on it.
Also, this is not some sort of function that is going to return you a value.
If you are not sure about what properties you need to mention in those brackets, an easier way would be to add that object to the Object Repository and drag it from the OR into your script. After that you can perform any action on those objects.
If you do not want to use the OR, then you need to make use of what we call Descriptive Programming (DP), wherein you mention the object property names and their values explicitly in the script.
Remember that the sole purpose of mentioning these properties is to help QTP identify the objects in your application so that you can perform actions on them (like Click, Set, etc.).
Here are a few links which can help you:
http://www.learnqtp.com/descriptive-programming-simplified/
http://www.guru99.com/quick-test-professional-qtp-tutorial-6.html
http://www.guru99.com/quick-test-professional-qtp-tutorial-32.html
EDIT 2 - for answering your question in the comment:
.WebButton("Locator1","Locator2","Locator3") means .WebButton("property1:=value1","property2:=value2","property3:=value3")
Now, I could have mentioned only property-value pair 1 (which you are referring to as "Locator1") if it alone was sufficient for identifying that WebButton. If one property-value pair cannot help UFT recognize the WebButton UNIQUELY, then I have to provide another property-value pair, and so on, until I have provided enough of them that QTP recognizes the WebButton uniquely. Since I have provided multiple property-value pairs (or locators), they have to be separated by commas; if there were only one property-value pair, no comma would be needed. All of this only applies when we are using Descriptive Programming. If we are not, then the objects and their properties and values are stored in the Object Repository, and you just have to mention their logical names (say Button1, as stored in the OR) in the script, like:
.WebButton("Button1")
To understand more, you should do some research on how object identification works in UFT/QTP.
I would like to access the underlying lists (before and after) based on detection of a ListChange when diffing two entities. All I have direct access to is the list elements that have changed through ContainerElementChange.
Other PropertyChange types accommodate this, but unfortunately ListChange does not appear to.
Is this possible?
No, there is no access to underlying lists from ListChange items.
We had some discussion about this last year (see https://github.com/javers/javers/issues/202) concerning ArrayChange.
You can file a feature request in our issue tracker.
Is it possible, using the AWS Ruby SDK (or just DynamoDB in general), to get an item or items from a table that uses a primary key only, and where that primary key ends with a certain string?
I haven't come across anything in the docs that explicitly answers this question, either in the ruby ddb docs or the general docs for ddb. I'm not saying the question is not answered, but if it is, I can't find it.
If it is possible, could someone provide an example for ruby or link to the docs where an example exists?
Although @Ryan is correct and this can be done with Query, just bear in mind that you're doing a full table scan here. That might be OK for a one-time job, but it's probably not the best practice for a routine task (and certainly not as part of your API calls).
If your use-case involves quickly finding objects based on their suffix in a specific field, consider extracting this suffix (assuming it's a fixed-size suffix) as another field and have a secondary index on that one. If you want to query arbitrary length suffixes, I would create a lookup table and update it with possible suffixes (or some of them, to save some calls, and then filter when querying).
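A minimal sketch of that suffix-extraction idea (the attribute name `key_suffix`, the fixed length, and the sample item are all made up): derive a fixed-size suffix attribute at write time, which a global secondary index on `key_suffix` would then make directly queryable.

```ruby
SUFFIX_LEN = 4  # assumed fixed suffix length

# Add the derived attribute you would back with a GSI, so suffix
# lookups become ordinary key queries instead of scans.
def with_suffix(item, key_attr: "id")
  item.merge("key_suffix" => item[key_attr][-SUFFIX_LEN..])
end

item = { "id" => "order-2024-ABCD", "total" => 42 }
with_suffix(item)  # adds "key_suffix" => "ABCD"
```

Each item then carries its own suffix, and `query` against the secondary index replaces the full table scan.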
It looks like you would want to use the Query method on the SDK to find the items you're looking for. It seems that "EndsWith" is not available as a comparison operator in the SDK, though, so you would need to use CONTAINS and then check your results locally.
This should lead to the best performance, letting DynamoDB do the initial heavy lifting and then further pruning the results once you receive them.
http://docs.aws.amazon.com/sdkforruby/api/Aws/DynamoDB/Client.html#query-instance_method
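One thing to watch with that approach: `contains()` matches the substring anywhere in the value, so the local check must verify that it is really a suffix. A minimal Ruby sketch of the pruning step (the item hashes stand in for what a call with a `contains(id, :sfx)` filter expression might return; all names are invented):

```ruby
suffix = "-ABCD"

# Items as DynamoDB might return them after a contains() filter:
# contains() also matches mid-string occurrences.
items = [
  { "id" => "order-1-ABCD" },      # true suffix match
  { "id" => "order-ABCD-extra" },  # mid-string match: must be pruned
  { "id" => "order-2-ABCD" }       # true suffix match
]

# Keep only the items whose key genuinely ends with the suffix.
matches = items.select { |item| item["id"].end_with?(suffix) }
matches.map { |i| i["id"] }  # => ["order-1-ABCD", "order-2-ABCD"]
```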
I'm trying to make an algorithm that easily simplifies and groups synonyms (with mismatches, capitals, acronyms, etc.) into only one. I suppose there should exist a standard way to build such a structure so that, when looking up a string with possible mismatches, if the string exists in the structure, it returns a normalized string key. In short, sometimes the same concept can be written in several ways, but I only want to keep the concept.
For instance: suppose I want to normalize or simplify the appearances of
"General Director", "General Manager", "G, Dtor", "Gen Dir", ...
into
"GEN_DIR"
and keep only this result for further reference.
By the way, I suppose that building a Hash with key/value pairs like
hash["General Director"]="GEN_DIR"
hash["General Manager"]="GEN_DIR"
hash["G, Dtor"]="GEN_DIR"
hash["G, Dir"]="GEN_DIR"
could be a solution, but I suspect that there are more elegant or adequate solutions than that.
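A small Ruby sketch of that dictionary approach (the entries and the `normalize` helper are illustrative): keying the hash on a canonicalized form (lowercased, whitespace-collapsed) means trivial mismatches in case and spacing don't each need their own entry.

```ruby
SYNONYMS = {
  "general director" => "GEN_DIR",
  "general manager"  => "GEN_DIR",
  "g, dtor"          => "GEN_DIR",
  "gen dir"          => "GEN_DIR"
}

# Canonicalize case and whitespace before the lookup, so each
# variant only needs one entry in the dictionary.
def normalize(raw)
  SYNONYMS[raw.strip.downcase.gsub(/\s+/, " ")]
end

normalize("  General   Director ")  # => "GEN_DIR"
```

Since this is a plain hash, it can be persisted without a database, e.g. with `YAML.dump` on save and `YAML.load_file` on startup.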
I would also need a way to persist this associative structure easily, without any database, because it should grow as I find more mismatches of the same word or sentence. A possible approach, I think, is to define this structure by means of a DSL, but I'm open to suggestions.
Well, there is no rule, at least not a clear one.
My aim is to scrape from the web some "structured" data that is sometimes incorrectly or incompletely typed. Some fields are descriptions and can be left as is. But some fields are supposed to be "sets" and aren't correctly typed (as in my example). A human reading that immediately knows what it means and can associate it with its meaning.
But I would like to automate as much as possible the process of reducing those possible mismatches to only one "string" (or symbol) before, for instance, saving it into a database. So, what I would need is a kind of hash or dictionary, as sawa correctly stated, that I can use to look up any such dirty string and get the normalized string or symbol.
Also, of course, it would be desirable for this hash (or whatever else it might be) to learn from new mismatches in some way and add a new association automatically (possibly based on a distance measure between the mismatched string and a normalized string: if lower than X, a new association is built). The whole association (i.e. the hash) should grow as new mismatches and concepts arise, and it should be persisted somewhere (possibly in an XML file, or something like what Mori answered below) for future use.
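That distance-based learning idea could be sketched like this in Ruby (the threshold of 3 and the plain Levenshtein implementation are assumptions for illustration, not a recommendation): when an unknown string is close enough to a known variant, map it to the same canonical symbol and remember it.

```ruby
# Plain Levenshtein edit distance via dynamic programming.
def levenshtein(a, b)
  rows = Array.new(a.length + 1) { |i| [i] + Array.new(b.length, 0) }
  (0..b.length).each { |j| rows[0][j] = j }
  (1..a.length).each do |i|
    (1..b.length).each do |j|
      cost = a[i - 1] == b[j - 1] ? 0 : 1
      rows[i][j] = [rows[i - 1][j] + 1,
                    rows[i][j - 1] + 1,
                    rows[i - 1][j - 1] + cost].min
    end
  end
  rows[a.length][b.length]
end

# Look up a raw string; if it is near a known variant, learn it
# automatically by mapping it to the same canonical value.
def learn(dict, raw, threshold: 3)
  key = raw.strip.downcase
  return dict[key] if dict.key?(key)
  nearest = dict.keys.min_by { |k| levenshtein(key, k) }
  if nearest && levenshtein(key, nearest) <= threshold
    dict[key] = dict[nearest]  # remember the new variant
  end
end

dict = { "general director" => "GEN_DIR" }
learn(dict, "Generel Director")  # distance 1, so learned as "GEN_DIR"
```

Dumping `dict` to a file after each learned variant would give the persistence-without-a-database behavior described above; the main risk is a too-generous threshold silently merging distinct concepts.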
Any new ideas?