Importing data older than the last transaction into Datomic

Am I understanding this correctly?
http://docs.datomic.com/transactions.html
You can set :db/txInstant explicitly, overriding the transactor's clock time. When you do, you must choose a :db/txInstant value that is not older than any existing transaction, and not newer than the transactor's clock time. This capability enables initial imports of existing data.
Does this mean I can't add facts into Datomic that are older than the last transaction in a way that as-of queries will work?
This seems like a huge restriction. What should I do when I interface with other systems? For example, I may get data about an order from yesterday, but the clerk only had time to enter it into the system today. If somebody has entered any facts today, won't I be unable to add that old data?
Is adding an explicit time attribute and using it in all queries the only option?

Unfortunately, when you have an application-driven notion of time, i.e. the "recorded" time at which some event that you are transacting "now" actually occurred, you also need an application-driven query mechanism. Look at a similar thread in the Datomic Google group.
So the answer is yes, you'll have to explicitly handle that "recorded" time.
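As a minimal sketch (assuming a hypothetical :order/recorded-at instant attribute in your schema and a made-up database URI), querying by the domain-level "recorded" time through the Datomic peer API from Java would look roughly like this:

    import java.util.Collection;
    import java.util.Date;
    import java.util.List;
    import datomic.Connection;
    import datomic.Peer;

    public class RecordedTimeQuery {
        public static void main(String[] args) {
            // Assumed URI; :order/recorded-at is a domain-level :db.type/instant attribute
            // asserted on each order entity, separate from the transaction's :db/txInstant.
            Connection conn = Peer.connect("datomic:dev://localhost:4334/orders");

            Date cutoff = new Date(); // the business "as of" instant you care about

            // Filter on the domain attribute instead of relying on d/as-of, so facts
            // entered late (yesterday's order keyed in today) are still returned.
            Collection<List<Object>> results = Peer.q(
                "[:find ?order ?t " +
                " :in $ ?cutoff " +
                " :where [?order :order/recorded-at ?t] " +
                "        [(<= ?t ?cutoff)]]",
                conn.db(), cutoff);

            results.forEach(System.out::println);
        }
    }

Any query whose cutoff falls after an order's recorded time will then see it, regardless of when the fact was actually transacted.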

Related

How to check properties before update in Elasticsearch?

I've already read the official documentation and found no way to do this.
My data is sent to ES from Kafka, and the messages can sometimes be out of order. Until now, each Kafka message was parsed and used to directly insert or update the ES doc with a specific ID. To prevent older data from overriding newer data, I have to check whether the doc with that specific ID already exists and whether some of its properties meet certain conditions; only then do I perform the UPDATE (or INSERT).
What I'm doing now is 'search before update'.
Before updating a doc, I search ES for the specific ID (included in the Kafka message), then check whether this doc meets the conditions (for example, whether its update_time is older). Finally I update the doc, and I set refresh to true so the index is updated instantly.
What I'm worried about:
This check-then-update sequence seems like it needs to be transactional.
If there is only one thread executing synchronously, is it possible that when I process the next message, the doc updated while processing the last message has not yet been refreshed in ES?
If I have several threads consuming Kafka messages, how can I check before updating? Can I use a script to solve this problem?
If there is only one thread executing synchronously, is it possible that when I process the next message, the doc updated while processing the last message has not yet been refreshed in ES?
That is a possibility, since indexes are refreshed once every second (by default). Reducing this interval is neither recommended nor guaranteed to give you the desired result, since Elasticsearch is NOT designed for this.
If I have several threads consuming Kafka messages, how can I check before updating? Can I use a script to solve this problem?
You can use a script if the number of fields being updated is very limited. Personally, I have found scripts to be best suited to single-field updates, and even then only for corner cases; they should not be used as a general practice. Any more than that and you run into the same risks as with stored procedures in the RDBMS world: it makes data management volatile overall and the system harder to maintain and extend in the long run.
Your use case is best suited for the optimistic locking support available from Elasticsearch out of the box. Take a look at Elasticsearch Versioning Support for full details.
You can very well use the built-in doc version if concurrency is the only problem you need to solve. If, however, you need more than concurrency (out-of-order message delivery and the corresponding ES updates), then you should use an application/domain-specific field, as the built-in version won't work as-is.
You can use any app-specific (numeric) field as a version field and use it for optimistic locking during document updates. If you take this approach, pay special attention to all insert, update, and delete operations for that index. Quoting as-is from the versioning support docs: when using external versioning, make sure you always add the current version (and version_type) to any index, update or delete calls. If you forget, Elasticsearch will use its internal system to process that request, which will cause the version to be incremented erroneously.
I'd recommend you evaluate the built-in version first and use it if it fulfills your needs; it will make the overall design much simpler. Consider the app-specific version as the second option only if the built-in version does not meet your requirements.
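As a minimal sketch of the external-versioning approach (the index name, document id, and the use of update_time as the version are assumptions), an indexing call over Elasticsearch's REST API from Java could look like this:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ExternalVersionIndexer {
        // The version is taken from the app's own update_time, so an older Kafka
        // message can never overwrite a newer document.
        static void index(HttpClient client, String docId, long updateTimeMillis, String docJson)
                throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/orders/_doc/" + docId
                    + "?version=" + updateTimeMillis + "&version_type=external"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(docJson))
                .build();

            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 409) {
                // A document with an equal or higher version is already indexed:
                // the incoming message is stale and can safely be dropped.
                System.out.println("Stale update ignored for " + docId);
            }
        }
    }

With version_type=external the write only succeeds when the supplied version is strictly greater than the stored one, so the check and the update happen atomically on the Elasticsearch side and no search-before-update is needed.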
If there is only one thread executing synchronously, is it possible that when I process the next message, the doc updated while processing the last message has not yet been refreshed in ES?
Ad 1. It is possible to save data in Elasticsearch and, shortly afterwards, receive a stale result (before the index has been refreshed).
If I have several threads consuming Kafka messages, how can I check before updating? Can I use a script to solve this problem?
Ad 2. If you process Kafka messages in several threads, it would be best to use business data (e.g. some business IDs) as partition keys in Kafka to ensure the data is processed in order. Remember to let Kafka distribute messages across many consumer threads; don't consume messages with a single consumer and fan them out to multiple threads afterwards.
Overall, it would be best to ensure the data is processed in order and then drop the check in Elasticsearch, since it is not guaranteed to give valid results.
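As a rough sketch of the partition-key idea (topic name, broker address, and the order id are made up):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class OrderEventProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");        // assumed broker address
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String orderId = "order-42";                          // hypothetical business id
                String payload = "{\"order_id\":\"order-42\",\"status\":\"SHIPPED\"}";

                // Using the business id as the message key means all events for the same
                // order land on the same partition, so a single consumer sees them in order.
                producer.send(new ProducerRecord<>("orders", orderId, payload));
            }
        }
    }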

Retrospective updates

We are planning to use the "history" interaction to support viewing historical snapshots of a resource (for example, viewing the care event details for an encounter as historical snapshots of that encounter).
For example,
GET encounter/{id}/_history/{vid}
We wanted to use the same structure to perform retrospective updates to a particular history entry using a PUT interaction:
PUT encounter/{id}/_history/{vid}
However, there seems to be a restriction on doing so, as mentioned here:
Accordingly, there is no way to update or delete past versions of the record, except that the metadata can be modified (mainly for access control purposes).
Is there any other mechanism for performing retrospective updates?
There is no mechanism to adjust history. History does not represent "the history of what occurred"; it represents "the set of versions that existed on this server at a particular period of time". As such, short of time travel, there's no meaningful need to change history records.
If you want to assert multiple separate things about a resource at different times, you can create multiple instances and link them together using Linkage. For example, a Condition that had one severity for a year, then escalated for two years, then went into remission for a year, then came back, then got resolved could be represented using multiple Condition records, each with a different effective period. Linkage could be used to indicate that they are all talking about the same Condition, and all of them could be created "now", the time when the server first became aware of that historical information.
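As a hedged sketch of that pattern (the server base URL and Condition ids are placeholders, and authentication is omitted), the Linkage resource below ties several Condition records, each carrying its own onset/abatement dates, together as statements about the same underlying condition:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class LinkConditions {
        public static void main(String[] args) throws Exception {
            String linkage = """
                {
                  "resourceType": "Linkage",
                  "item": [
                    { "type": "source",     "resource": { "reference": "Condition/cond-2019" } },
                    { "type": "historical", "resource": { "reference": "Condition/cond-2016" } },
                    { "type": "historical", "resource": { "reference": "Condition/cond-2015" } }
                  ]
                }
                """;

            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.org/fhir/Linkage"))
                .header("Content-Type", "application/fhir+json")
                .POST(HttpRequest.BodyPublishers.ofString(linkage))
                .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }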

Cost of time-stamping as a method of concurrency control with Entity Framework

With optimistic concurrency, the usual way to control concurrency is a timestamp field. However, in my particular case, not all the fields need to be controlled with respect to concurrency.
For example, I have a products table holding the amount of stock. This table has fields like description, code, etc. For me, it is not a problem if one user modifies these fields, but I do have to detect when some other user changes the stock.
So if I use a timestamp and one user changes the description while another changes the amount of stock, the second user will get an exception.
However, if I use the stock field for the concurrency check instead, then the first user can update the information and the second can update the stock without problems.
Is it a good solution to use the stock field to control concurrency, or is it better to always use a timestamp field?
And if in the future I need to add another important field, would I then need two fields to control concurrency (the stock and the new one)? Does that have a high cost in terms of performance?
Consider the definition of optimistic concurrency:
In the field of relational database management systems, optimistic concurrency control (OCC) is a concurrency control method that assumes that multiple transactions can complete without affecting each other, and that therefore transactions can proceed without locking the data resources that they affect. (Wikipedia)
Clearly this definition is abstract and leaves a lot of room for your specific implementation.
Let me give you an example. A few years back I evaluated the same thing with a bunch of colleagues and we realized that in our application, on some of the tables, it was okay for the concurrency to simply be based on the fields the user was updating.
So, in other words, as long as the fields they were updating hadn't changed since they gathered the row, we'd let them update the row, because the rest of the fields really didn't matter and the row was going to get refreshed on update anyway, so they would get the most recent changes by other users.
So, in short, I would say what you're doing is just fine and there aren't really any hard and fast rules. It really depends on what you need. If you need it to be more flexible, like what you're talking about, then make it more flexible -- simple.
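To make the idea concrete, here is a minimal sketch of field-level optimistic concurrency in plain JDBC (table and column names are made up). Entity Framework generates an equivalent WHERE clause when a property is configured as a concurrency token, and reports a concurrency exception when no row matches:

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class StockUpdater {
        // Returns false when another user changed the stock after we read it,
        // i.e. the optimistic concurrency check on that single field failed.
        static boolean updateStock(Connection conn, long productId,
                                   int stockWhenRead, int newStock) throws Exception {
            String sql = "UPDATE products SET stock = ? WHERE product_id = ? AND stock = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setInt(1, newStock);
                ps.setLong(2, productId);
                ps.setInt(3, stockWhenRead);
                return ps.executeUpdate() == 1;  // 0 rows => someone else modified the stock
            }
        }
    }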

Locking records returned by context? Or perhaps a change to my approach

I'm not sure whether I need a way to lock records returned by the context or simply need a new approach.
Here's the story. We currently have a small number of apps that integrate with our CRM. Some of them open an XrmServiceContext and return a few thousand records to perform updates on. These scripts call SaveChanges along the way, but there will still be accounts near the end that are saved a couple of minutes after the context returned them. If a user updates a record during this time, their changes are overwritten by the script.
Is there a way of locking the records until the context has saved the update back or is there a better approach I should be taking?
Kit
In my opinion, this type of database transaction issue is what CRM is currently lacking the most. There is no way to ensure that someone else doesn't monkey with your data; it's always a last-one-in-wins world in CRM.
With that being said, my suggestion would be to only update the attributes you care about. If you're returning all columns for an entity, when you update that entity, you're possibly going to update all the attributes of the entity, even if you only updated one of them.
If you're dealing with a system were you can't tolerate the last-one-in-wins mentality, then you're probably better off not using CRM.
Update 1
CRM 2015 SP1 and above supports Optimistic Updates, which allow the use of a version number to ensure that no one has updated the record since you retrieved it.
You have several options here; it just depends on what you want to do. First of all, though, if you can move some of these automated processes to off-peak hours, that's the best option.
Another option would be to retrieve records one by one instead of 1000+ at a time.
If you are only updating a percentage of the records retrieved, then you would be better off checking before saving whether an update has occurred (by comparing the modified date). If the modified date has changed, do a single retrieve and then save.
At first thought, I would create a field or status that indicates a pending operation and then use JScript in the form's OnLoad event to warn about/lock the form. When your process completes, it could clear the flag.
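For completeness, here is a hedged sketch of what an optimistic update looks like over the Dynamics Web API (a later mechanism than the 2015 SP1 SDK support mentioned above; the org URL, record id, and ETag are placeholders, and OAuth authentication is omitted):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class GuardedCrmUpdate {
        public static void main(String[] args) throws Exception {
            String url = "https://myorg.crm.dynamics.com/api/data/v9.2/"
                       + "accounts(00000000-0000-0000-0000-000000000001)";
            String etagFromRetrieve = "W/\"12345678\"";   // captured when the record was retrieved
            String body = "{\"description\":\"updated by the nightly script\"}";

            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/json")
                .header("If-Match", etagFromRetrieve)     // only update if the row version is unchanged
                .method("PATCH", HttpRequest.BodyPublishers.ofString(body))
                .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 412) {
                // Precondition Failed: a user changed the record since the script retrieved it,
                // so re-retrieve and re-apply only the fields the script owns.
                System.out.println("Record changed since retrieval; retrying");
            }
        }
    }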

Is there a way to fix an Oracle query in the shared pool

I have a report engine, executing PreparedStatements against Oracle 11, that is a highly prioritized task.
What I see is that the first invocation of a query usually takes much, much longer than the same query afterwards (the query has different parameters and returns different data).
I suppose this is due to the hard parsing Oracle does on the first invocation of the query.
I wonder, is there a way of hinting to Oracle that this is a high-priority query which will be performed often and whose performance is critical, so that it should remain in the shared pool, no matter what?
I know that I can fix the execution plan in Oracle 11, but I don't want to fix it; I want Oracle to still be able to change it as the system changes. All I want is to avoid the query's hard parsing.
Perhaps you should change your "I suppose..." into an "I tested and have determined..." :)
The query's performance may be affected by more than just parsing; when it executes, it has to fetch blocks from disk into the buffer cache. Subsequent executions quite possibly take advantage of the blocks already being in memory, and so are faster.
EDIT: to answer your immediate question - a workaround may be to have a job run periodically that parses the query but doesn't execute it. You might even be able to use this to determine whether parsing or fetching is the locus of the problem.
You can try pinning it to the shared pool using dbms_shared_pool.keep.
But I would first make sure that you actually have an aging-out problem.
Anton,
If your query uses bind variables it will be re-used. The cursor will be cached and, as long as it is re-used, it will remain in the cursor cache. Make sure that it uses bind variables; this increases re-usability and scalability.
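A minimal sketch of what that means in JDBC (connection details, SQL, and parameter values are placeholders); repeated executions with different bind values reuse the same cached cursor, so only the first execution pays the hard parse:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ReportQuery {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//localhost:1521/ORCL", "report_user", "secret");
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT order_id, total FROM orders WHERE customer_id = ? AND status = ?")) {

                ps.setLong(1, 42L);       // different values on each run...
                ps.setString(2, "OPEN");  // ...still reuse the same cached cursor
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("order_id") + " " + rs.getBigDecimal("total"));
                    }
                }
            }
        }
    }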
If you don't trust the RDBMS, you can pin it using dbms_shared_pool.keep.
See http://psoug.org/reference/dbms_shared_pool.html
You need to find your cursor in order to do so.
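A hedged sketch of finding the cursor and pinning it from JDBC (the SQL text pattern is a placeholder; this assumes DBMS_SHARED_POOL is installed and the session may query V$SQLAREA and execute the package):

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PinCursor {
        static void pin(Connection conn) throws Exception {
            // address + hash_value identify the cursor in the shared pool.
            String find = "SELECT address || ',' || hash_value AS cursor_id "
                        + "FROM v$sqlarea WHERE sql_text LIKE 'SELECT order_id, total FROM orders%'";
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(find)) {
                while (rs.next()) {
                    // Flag 'C' pins a cursor (as opposed to a package or procedure).
                    try (CallableStatement cs =
                             conn.prepareCall("{call dbms_shared_pool.keep(?, 'C')}")) {
                        cs.setString(1, rs.getString("cursor_id"));
                        cs.execute();
                    }
                }
            }
        }
    }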
Normally, though, there is another problem that should be fixed instead.
Ronald.
http://ronr.blogspot.com

Resources