How do multi-master LDAP nodes handle concurrent updates? - OpenDJ

How does multi-master replication handle concurrent updates to an entry's attribute?
Is it last update wins?
Thanks

Yes, the last operation "wins", just as if two independent applications tried to update the same attribute on the same server.
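For illustration, here is a minimal sketch of that behavior from the client side, using the UnboundID LDAP SDK. The host names, entry DN, and credentials are placeholders, and which write "wins" is decided by the replication change ordering, not by which client call returns first.

import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.Modification;
import com.unboundid.ldap.sdk.ModificationType;

public class LastWriterWins {
    public static void main(String[] args) throws Exception {
        // Two clients bound to two different replicas of the same topology
        // (placeholder hosts and credentials).
        LDAPConnection replicaA = new LDAPConnection(
                "replica-a.example.com", 389, "cn=Directory Manager", "password");
        LDAPConnection replicaB = new LDAPConnection(
                "replica-b.example.com", 389, "cn=Directory Manager", "password");

        String dn = "uid=jdoe,ou=People,dc=example,dc=com";

        // Both replace the same single-valued attribute at almost the same time.
        replicaA.modify(dn, new Modification(
                ModificationType.REPLACE, "telephoneNumber", "+1 555 0100"));
        replicaB.modify(dn, new Modification(
                ModificationType.REPLACE, "telephoneNumber", "+1 555 0200"));

        // Once replication converges, all replicas keep the value carried by
        // the later change: here, "+1 555 0200".
        replicaA.close();
        replicaB.close();
    }
}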

Related

Is it possible to get details from a web service every 5 minutes in NiFi?

I want to build a system that monitors a third party from NiFi. I want to apply a filter condition on the InvokeHTTP processor so that it gets all the alarms from the web app every 5 minutes but does not fetch duplicate alarms that were already fetched before. Is that possible? Alternatively, when an alarm arrives in my web app, can a workflow in NiFi be triggered to do some task like update, assign or delete?
Thanks in advance.
You can use DetectDuplicate to determine if an incoming flowfile is an exact copy of a recently-seen flowfile (as determined by a value computed from the flowfile attributes), which leverages the distributed state cache. This cache is pluggable and can be provided by a native implementation, HBase, or Redis.
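DetectDuplicate is configured rather than coded, but the mechanism it applies is roughly the following. This sketch uses a local in-memory map as a stand-in for NiFi's distributed map cache, and the key format and names are just examples.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DuplicateFilter {
    // Stand-in for NiFi's distributed map cache; in a real flow this is
    // shared across the cluster and entries are aged out by the cache.
    private final Map<String, Long> seen = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public DuplicateFilter(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    /** Returns true if an alarm with the same cache key was seen recently. */
    public boolean isDuplicate(String alarmId, String severity) {
        // The "cache entry identifier": a value computed from flowfile
        // attributes, like the key you would configure on DetectDuplicate.
        String key = alarmId + "|" + severity;
        long now = System.currentTimeMillis();
        Long previous = seen.putIfAbsent(key, now);
        if (previous == null) {
            return false;           // first time this alarm is seen
        }
        if (now - previous > ttlMillis) {
            seen.put(key, now);     // entry expired; treat as new
            return false;
        }
        return true;                // recently seen: route to "duplicate"
    }
}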

couchbaseTemplate.save(List<Employee>): how to save multiple objects in one go

I want to save multiple Employee objects as Couchbase documents, but I am worried about the following use case: I have a List of size 5.
Suppose it has saved 3 objects as 3 documents, and the Couchbase server somehow goes down while saving the remaining 2 documents. What happens in that case?
1) Do all my saved documents get rolled back?
2) Do the other 2 documents still get persisted?
3) If neither, what is the recommended option for this use case?
From the reference documentation:
Couchbase Server does not support multi-document transactions or rollback.
So neither 1) nor 2) will happen: the 3 documents already saved remain, and the remaining 2 are simply never written.
If you need such transaction guarantees, you have to either use a database product that supports them or implement them on your own.
The typical approach when working with non-transactional stores is not to rely on consistency, for example by working with idempotent actions, i.e. actions you can simply redo after a failure.
In this specific example, you might first store the 5 documents combined into a single document and then split it up in a separate process, as sketched below. The first write is atomic because it is a single-document operation, and the split process can be repeated until it succeeds.
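A rough sketch of that combine-then-split idea, reusing the couchbaseTemplate API from the question; the EmployeeBatch wrapper, the ids, and the Employee fields are hypothetical.

import java.util.List;

import org.springframework.data.annotation.Id;
import org.springframework.data.couchbase.core.CouchbaseTemplate;
import org.springframework.data.couchbase.core.mapping.Document;

@Document
class Employee {
    @Id String id;
    String name;
    Employee(String id, String name) { this.id = id; this.name = name; }
}

// Hypothetical wrapper: all employees combined into ONE document, so the
// first write is atomic (Couchbase guarantees single-document atomicity).
@Document
class EmployeeBatch {
    @Id String id;
    List<Employee> employees;
    EmployeeBatch(String id, List<Employee> employees) {
        this.id = id;
        this.employees = employees;
    }
}

class BatchSplitter {
    private final CouchbaseTemplate couchbaseTemplate;

    BatchSplitter(CouchbaseTemplate couchbaseTemplate) {
        this.couchbaseTemplate = couchbaseTemplate;
    }

    // Step 1: one atomic write of the whole batch.
    void writeBatch(List<Employee> employees) {
        couchbaseTemplate.save(new EmployeeBatch("batch::42", employees));
    }

    // Step 2: a separate, repeatable process splits the batch into individual
    // documents. Saving the same employee under the same id is idempotent,
    // so after a crash this method can simply be run again.
    void splitBatch(String batchId) {
        EmployeeBatch batch = couchbaseTemplate.findById(batchId, EmployeeBatch.class);
        if (batch == null) {
            return; // batch already split and removed
        }
        for (Employee e : batch.employees) {
            couchbaseTemplate.save(e);
        }
        couchbaseTemplate.remove(batch); // only after every split succeeded
    }
}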
Adding to jens-schauder's answer:
If you have at least 3 nodes in your Couchbase cluster, this issue should not normally happen.
Say a node goes down: the cluster will automatically fail over what that node was master of (one third of the data) to the other 2 nodes, so writes keep working seamlessly.

Oracle database as a single synchronization point for two separate web applications

I am considering using an Oracle database to synchronize concurrent operations from two or more web applications on separate servers. The database is the single infrastructure element in common for those applications.
There is a good chance that two or more applications will attempt to perform the same operation at the exact same moment (cron invoked). I want to use the database to let one application decide that it will be the one which will do the work, and that the others will not do it at all.
The general idea is to perform a somehow-atomic select/insert, visible to all connections, with the node's ID. Only the node whose ID matches the first inserted node ID returned by the select would do the work.
It was suggested to me that a MERGE statement could be of use here. However, after doing some research, I found a discussion which states that the MERGE statement is not designed to be called concurrently from multiple sessions.
Another option is to lock a table. By definition, only one node will be able to lock the table and do the insert, then the select. After the lock is released, the other instances will see the inserted value and will not perform the work.
What other solutions would you consider? I frown on workarounds with random delays, or even using Oracle exceptions to notify a node that it should not do the work. I'd prefer a clean solution.
I ended up going with SELECT ... FOR UPDATE. It works as intended. It is important to remember to commit the transaction as soon as the needed update is made, so that other nodes don't hang waiting on the row lock.
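A hedged JDBC sketch of that pattern; the job_lock table, its columns, and the 5-minute period are made-up examples, not the poster's actual schema.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.time.Duration;
import java.time.Instant;

public class CronCoordinator {

    // Assumed one-row-per-job table, pre-populated:
    //   CREATE TABLE job_lock (job_name VARCHAR2(64) PRIMARY KEY, last_run TIMESTAMP);

    /** Returns true if this node won the lock and performed the work. */
    static boolean tryRunJob(Connection conn, String jobName, Runnable work)
            throws Exception {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT last_run FROM job_lock WHERE job_name = ? FOR UPDATE")) {
            ps.setString(1, jobName);
            // executeQuery blocks here until any node holding the row lock commits.
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    conn.rollback();
                    return false; // job row missing; nothing to coordinate on
                }
                Timestamp lastRun = rs.getTimestamp(1);
                if (lastRun != null && lastRun.toInstant()
                        .isAfter(Instant.now().minus(Duration.ofMinutes(5)))) {
                    conn.rollback(); // another node already ran this period
                    return false;
                }
            }
            work.run(); // only the lock holder gets here for this period
            try (PreparedStatement up = conn.prepareStatement(
                    "UPDATE job_lock SET last_run = SYSTIMESTAMP WHERE job_name = ?")) {
                up.setString(1, jobName);
                up.executeUpdate();
            }
            conn.commit(); // commit promptly so blocked nodes are released
            return true;
        } catch (Exception e) {
            conn.rollback();
            throw e;
        }
    }
}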

How to organize a work pool based on a PostgreSQL table?

Suppose I have a big table in PostgreSQL (more than 500 GB): the work pool. I also have a number of worker processes getting work from the pool.
What is the most efficient way to implement a release manager that returns the next row from the
'work pool' table in response to worker requests? Maybe some kind of cursor, iterator, or whatever?
UPD: I forgot one key thing: the table is constant. No INSERT or UPDATE operations are allowed; we only read from it.
PGQ may or may not be suitable for the problem. It covers similar problem areas, so have a look.
I wanted to be redirected to this and this. Thanks to http://habrahabr.ru/qa/22030/ and users ToSHiC and strib.
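Since the table is read-only (per the UPD), one simple alternative to a full queueing product is a single dispatcher that streams the table through a server-side cursor and hands rows to workers. A sketch using the PostgreSQL JDBC driver; the connection string, table, and column names are invented, and a local thread stands in for the worker processes.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class WorkDispatcher {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/pool", "worker", "secret");

        // Autocommit off + a fetch size makes the PostgreSQL JDBC driver read
        // through a server-side cursor instead of materializing 500 GB at once.
        conn.setAutoCommit(false);

        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);

        // Real worker processes would call the dispatcher over the network;
        // here a single local thread stands in for them.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String item = queue.take(); // next unit of work
                    System.out.println("processing " + item);
                }
            } catch (InterruptedException ignored) {
            }
        });
        worker.setDaemon(true);
        worker.start();

        try (Statement st = conn.createStatement()) {
            st.setFetchSize(1_000); // rows pulled from the cursor per round trip
            try (ResultSet rs = st.executeQuery(
                    "SELECT payload FROM work_pool")) { // invented table/column
                while (rs.next()) {
                    queue.put(rs.getString(1)); // blocks when workers fall behind
                }
            }
        }
        conn.close();
    }
}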

Does the Informatica Integration Service treat the filter transformation as a passive transformation?

I had a question from my training: does the Integration Service treat the filter or update strategy transformations as passive transformations?
Ramkumar - what do you mean by "if the integration service treats..."?
The filter as well as the update strategy are both active. By definition, any transformation that changes the number of rows passing through it is active; the filter transformation does exactly this.
The update strategy is considered active because it changes the row type. Although it doesn't by itself change the number of rows, it marks rows for delete, insert or update, producing the same result.
Ramkumar - what do you mean by saying, "if the integration service treats..."? Really, as you said, if the filter is active, it's active. I don't think the Integration Service can choose to treat a transformation as active or passive on a case-by-case basis.
Both the filter and the update strategy are active transformations.
