Informatica Lookup & Update strategy error - informatica-powercenter

I have been trying a lookup and update strategy transformation, but for some reason it's not working.
I have an unconnected lookup in my mapping which is called based on a particular condition,
and when the lookup returns null, I set a variable to 'INS', 'REJ', 'UPD', or 'DEL'.
The new variable in my agg_trans, INS_UPD_DEL, is not getting linked to the upd_strategy transformation. I'm able to move the variable from the expression to update_strategy, but I don't see the link being displayed.

Well, Aggregator is an active transformation. If you are creating a new port in the Aggregator, you won't be able to bypass this transformation for the other ports on their way to the Update Strategy.
Without asking you for more details about your functionality, you can fix this by simply routing your ports through the Aggregator. Whatever you need in the Update Strategy, make sure it flows through the Aggregator: Source Qualifier -> Aggregator -> Update Strategy.
I hope that's clear enough; if not, consider going through the documentation on active transformations.

Related

GA3 Event Push: Necessary fields in Request

I am trying to push an event to GA3, mimicking an event sent by a browser to GA. With this event I want to fill Custom Dimensions (visible in the User Explorer) and relate them to a GA client ID that has visited the website earlier. Can this be done without influencing the website data too much? I want to enrich someone's data from an external source.
So far I can't seem to find the minimum set of fields that has to be in the event call for this to work. I've got these so far:
v=1&
_v=j96d&
a=1620641575&
t=event&
_s=1&
sd=24-bit&
sr=2560x1440&
vp=510x1287&
je=0&_u=QACAAEAB~&
jid=&
gjid=&
_u=QACAAEAB~&
cid=GAID&
tid=UA-x&
_gid=GAID&
gtm=gtm&
z=355736517&
uip=1.2.3.4&
ea=x&
el=x&
ec=x&
ni=1&
cd1=GAID&
cd2=Companyx&
dl=https%3A%2F%2Fexample.nl%2F&
ul=nl-nl&
de=UTF-8&
dt=example&
cd3=CEO
So far the Custom Dimension fields don't get overwritten with new values. Who knows which field is missing, or can share a list of necessary fields and example values?
Ok, a few things:
The CD value will be overwritten only if this CD's scope is set to the user level in GA. Make sure it is.
You need to know the client id of the user. You can confirm that you have the right CID by using the User Explorer in the GA interface, unless you already track it in a CD. It allows filtering by client id.
You want to make this hit non-interactional, otherwise you're inflating the session count, since GA will generate sessions for normal hits. A non-interaction hit has ni=1 among the params.
Wait. Scope calculations don't happen immediately in real time. They happen later on. Give it two days, then check the results and re-run your experiment.
Use a throwaway/test/lower GA property to experiment. You don't want to affect production data while you're not sure exactly what you're doing.
There. A good use case for such an activity would be something like updating the lifetime value of existing users and wanting to enrich the data with it without waiting for all of them to come in. That's useful for targeting, attribution and more.
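To make the "which parameters are necessary" part concrete, here is a rough sketch of a minimal user-scoped, non-interaction event hit. It assumes the standard GA3 Measurement Protocol /collect endpoint; the tid, cid and the cd1/cd3 indexes are placeholders taken from the question, and the event category/action values are made up.

# Rough sketch: minimal GA3 Measurement Protocol event hit that only sets
# user-scoped custom dimensions. tid/cid/cd indexes are placeholders.
import requests

payload = {
    "v": "1",            # protocol version
    "tid": "UA-x",       # property id (use a test property first)
    "cid": "GAID",       # client id of the user you want to enrich
    "t": "event",        # hit type
    "ec": "enrichment",  # event category (made up)
    "ea": "update-cd",   # event action (made up)
    "ni": "1",           # non-interaction hit, so sessions are not inflated
    "cd1": "GAID",       # user-scoped custom dimension values
    "cd3": "CEO",
}

resp = requests.post("https://www.google-analytics.com/collect", data=payload)
print(resp.status_code)  # GA returns 200 even for bad hits

The /debug/collect endpoint validates a hit and reports problems without recording it, which helps while the User Explorer is lagging two days behind.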
Thank you.
This is the case; all CDs are user-scoped.
This is the case; we are collecting them.
ni=1 is within the parameters of each event call.
There are so many parameters; which ones are necessary?
We are using a test property for this.
We also checked the Bot Filtering setting.
It's hard to test when the User Explorer has a delay of 2 days and we are still not sure which parameters to use and which not. Who could help on the parameter part? My only goal is to update the CDs on the person. Who knows which parameters need to be part of the event call?

Compensating Events on CQRS/ES Architecture

So, I'm working on a CQRS/ES project in which we are having some doubts about how to handle trivial problems that would be easy to handle in other architectures.
My scenario is the following:
I have a customer CRUD REST API, and each customer has a unique document (number), so when I'm registering a new customer I have to verify that there is no other customer with that document, to avoid duplication. But when it comes to a CQRS/ES architecture, where we have eventual consistency, I found out that this kind of validation can be very hard to address.
It is important to notice that my problem is not across microservices, but between the command application and the query application of the same microservice.
Also, we are using eventstore.
My current solution:
So what I do today is, in my command application, before saving the CustomerCreated event, I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right? Because my query can be desynchronized, so I cannot trust it 100%. That's when my second validation kicks in, when my query application is processing the events and saving them to my PostgreSQL, I check again if there is a customer with that document and if there is, I reject that event and emit a compensating event to undo/cancel/inactivate the customer with the duplicated document, therefore finishing that customer stream on eventstore.
Although this works, there are 2 things that bother me here. The first is my command application relying on the query application: if my query application is down, my command side is affected (today I just return false in my validation if the query side is down, but still...). The second is: should a query/read model really be able to emit events? And if so, what is the correct way of doing it? Should the command side have some kind of API for that? Or should the query side emit the event directly to eventstore using some common shared library? And if I have more than one view/read model, which one should I choose to handle this?
I really hope someone could shine a light on these questions and help me with these matters.
For reference, you may want to be reviewing what Greg Young has written about Set Validation.
I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right?
That's exactly right - your read model is a stale copy, and may not have all of the information collected by the write model.
That's when my second validation kicks in, when my query application is processing the events and saving them to my PostgreSQL, I check again if there is a customer with that document and if there is, I reject that event and emit a compensating event to undo/cancel/inactivate the customer with the duplicated document, therefore finishing that customer stream on eventstore.
This wording doesn't quite match the usual designs. The more common implementation is that, if we detect a problem when reading data, we send a command message to the write model, telling it to straighten things out.
This is commonly referred to as a process manager, but you can think of it as the automation of a human supervisor of the system. Conceptually, a process manager is an event sourced collection of messages to be sent to the command model.
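As a rough sketch of that idea (all names here are hypothetical, not taken from your code): the process manager listens to events, consults the read model, and sends a command back to the write model instead of emitting events from the query side.

# Rough sketch (hypothetical names): a process manager that reacts to duplicate
# documents by sending a command to the write model, in its own transaction.
from dataclasses import dataclass

@dataclass
class DeactivateCustomer:            # compensating command handled by the write model
    customer_id: str
    reason: str

class DuplicateDocumentProcessManager:
    def __init__(self, command_bus, projection):
        self.command_bus = command_bus   # however commands reach the write model (API, broker, ...)
        self.projection = projection     # read model indexed by document number

    def on_customer_created(self, event):
        existing = self.projection.find_by_document(event.document)
        if existing and existing.customer_id != event.customer_id:
            # The write model resolves the duplicate; the query side never emits events itself.
            self.command_bus.send(DeactivateCustomer(
                customer_id=event.customer_id,
                reason=f"duplicate document {event.document}",
            ))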
You might also want to consider whether you are modeling your domain correctly. If documents are supposed to be unique, then maybe the command model should be using the document number as a key in the book of record, rather than using the customer. Or perhaps the document id should be a function of the customer data, rather than being an arbitrary input.
as far as I know, eventstore doesn't have transactions across different streams
Right - one of the things you really need to be thinking about in general is where your stream boundaries lie. If set validation has significant business value, then you really need to be thinking about getting the entire set into a single stream (or by finding a way to constrain uniqueness without using a set).
How should I send a command message to the write model? via API? via a message broker like Kafka?
That's plumbing; it doesn't really matter how you do it, so long as you are sure that the command runs within its own transaction/unit of work.
So what I do today is, in my command application, before saving the CustomerCreated event, I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right? Because my query can be desynchronized, so I cannot trust it 100%.
No, you cannot safely rely on the query side, which is eventually consistent, to prevent the system from stepping into an invalid state.
You have two options:
You permit the system to enter a temporary, pending state and then, eventually, you bring it into a valid permanent state. For this you could allow the command to pass, yield a CustomerRegistered event, and, using a Saga/Process manager, verify against a collection uniquely indexed by document and issue a compensating command (not an event!), i.e. UnregisterCustomer.
Instead of sending a command, you create & start a Saga/Process that pre-allocates the document in a collection uniquely indexed by document and, if that succeeds, then sends the RegisterCustomer command. You can model the Saga as an entity.
So, in both solutions you use a Saga/Process manager. In order for the system to be resilient, you should make sure that the RegisterCustomer command is idempotent (so you can resend it if the Saga fails/is restarted).
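A minimal sketch of the idempotency part, assuming a hypothetical event-store client that supports an expected-version guard when appending (EventStore's optimistic concurrency checks work along these lines):

# Rough sketch (hypothetical client API): an idempotent RegisterCustomer handler.
# Appending with an "expected version: no stream" guard means a resent command
# finds the stream already created and is simply ignored.
class StreamAlreadyExists(Exception):
    pass

def handle_register_customer(event_store, cmd):
    stream = f"customer-{cmd.customer_id}"
    event = {"type": "CustomerRegistered",
             "data": {"customer_id": cmd.customer_id, "document": cmd.document}}
    try:
        event_store.append(stream, [event], expected_version="no_stream")
    except StreamAlreadyExists:
        pass  # the customer was already registered by an earlier delivery of this command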
You've butted up against a fairly common problem. I think the other answer by VoiceOfUnreason is worth reading. I just wanted to make you aware of a few more options.
A simple approach I have used in the past is to create a lookup table. Your command tries to register the key in a unique constraint table. If it can reserve the key, the command can go ahead.
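A minimal sketch of that reservation idea, assuming PostgreSQL (which the question already uses on the read side) and a made-up table with a unique constraint on the document column:

# Rough sketch: reserve the document number before accepting the command.
# Table and column names are made up; psycopg2 is assumed.
import psycopg2

def try_reserve_document(conn, document, customer_id):
    try:
        with conn:  # commits on success, rolls back on error
            with conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO customer_document_reservation (document, customer_id) "
                    "VALUES (%s, %s)",
                    (document, customer_id),
                )
        return True   # key reserved, the command can go ahead
    except psycopg2.errors.UniqueViolation:
        return False  # another customer already holds this document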
Depending on the nature of the data and the domain, you could let this 'problem' occur and raise additional events to mark it. If it is something that's important to the business or to the way the application works, then you can deal with it either manually or via compensating commands at the time. If the latter, it would make sense to use a process manager.
In some (rare) cases where speed/capacity is less of an issue, you could consider old-fashioned locking and transactions. Admittedly these are much better suited to CRUD-style implementations, but they can be used in CQRS/ES.
I have more detail on this in my blog post: How to Handle Set Based Consistency Validation in CQRS
I hope you find it helpful.

How to persist unfinished Aggregate Root in invalid state?

I have an aggregate root that needs to be in a valid state in order to be used in the system properly. However, the process of building the aggregate is long enough for users to get distracted. Sometimes, all the user wants is to configure some part of this big aggregate, then save his work and go home, and tomorrow he will finish constructing the aggregate.
How can I do this? My PM mandated that we allow aggregates to have an invalid state, and then we check an IsValid boolean right before we use them.
I personally went down another path: I used the Builder pattern to build my aggregate, and now I'm planning to persist the builder itself as an intermediate state.
I have an aggregate root that needs to be in a valid state in order to be used in the system properly. However, the process of building the aggregate is long enough for users to get distracted. Sometimes, all the user wants is to configure some part of this big aggregate, then save his work and go home, and tomorrow he will finish constructing the aggregate.
How can I do this?
You have two aggregates -- one is the "configuration" that the users can edit at their own pace. The other is the live running instance built from a copy of the configuration, but only if that configuration satisfies the invariant of the running instance.
By the way, there can be a situation where the "running" aggregate should be edited again (in fact, this is frequent). Then it can become invalid again.
You have two obvious options here:
Model every in-progress creation/change process as a different aggregate (perhaps even in a different Bounded Context -- it could be CRUD). That allows you to free your main aggregate from contextual validation.
Use the same aggregate instance, but in different states. For instance, you may have some fields as Optional<T> which could be empty while the aggregate is in the draft state, but can't be once it is published. The state pattern could possibly be useful here (see the sketch below).
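A small sketch of the second option (hypothetical names): the same aggregate in two states, where the required fields may stay empty in the draft state and the invariant is only enforced when it is published.

# Rough sketch (hypothetical names): draft vs. published states on one aggregate.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class State(Enum):
    DRAFT = "draft"
    PUBLISHED = "published"

@dataclass
class Product:
    name: str
    price: Optional[float] = None       # may be missing while drafting
    description: Optional[str] = None
    state: State = State.DRAFT

    def publish(self):
        missing = [f for f in ("price", "description") if getattr(self, f) is None]
        if missing:
            raise ValueError(f"cannot publish, missing: {missing}")
        self.state = State.PUBLISHED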

How do I determine list of valid events for current state

When representing a state machine in a UI, it would be useful to know the list of valid events for the current state, in order to disable/hide invalid options.
I would not want to replicate all the state machine rules.
We have had this issue (gh-44) open for a long time, but I haven't found any reliable way to look into the future and predict what a machine would do.
The problem comes from the fact that, with nested hierarchical states, the same event can be handled by any state starting from the active one up through its parent structure. As transitions can have guards which may evaluate conditions dynamically, even if we added some sort of prediction engine, it would still be impossible to predict the future, since we don't have a time machine.
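A tiny standalone sketch (not tied to any particular state machine library's API) of why this is the case: whether an event is "valid" can depend on a guard that reads runtime context, so the question can only be answered by evaluating the guard at that exact moment.

# Rough sketch: a guard whose outcome depends on runtime context, so the set of
# "valid" events for a state cannot be computed statically for the UI.
transitions = {
    ("EDITING", "SUBMIT"): {
        "target": "REVIEW",
        "guard": lambda ctx: ctx["word_count"] >= 100,  # evaluated dynamically
    },
}

def can_fire(state, event, ctx):
    t = transitions.get((state, event))
    return t is not None and t["guard"](ctx)

print(can_fire("EDITING", "SUBMIT", {"word_count": 50}))   # False right now
print(can_fire("EDITING", "SUBMIT", {"word_count": 250}))  # True a moment later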

Need to Query Novell NDS eDirectory Attribute Metadata Not Available Through LDAP

Ok so this question is a hard one to answer for those who do not have much experience with Novell eDirectory.
What I am trying to do is create a workflow that will delete user objects in my eDirectory tree upon meeting certain requirements. The problem is that one of these requirements is dependent on a timestamp. The attribute I am looking at is the LoginDisabled attribute. This is a Boolean value, so either True or False. When looking at this attribute via LDAP methods you only get back the Boolean, which is fine.
However, my requirements, as set forth by internal policy, state that only accounts that have been set to True for a minimum of 30 days can have actions performed against them. The only place I can see this timestamp is through the NDS iMonitor tool.
So my question is: how do I query this timestamp, which is stored outside of LDAP, without having to look up each user individually in iMonitor?
If possible I would prefer a script that utilizes PowerShell, but I can also use C# or Python.
Yes, there are other things that could be done to extend the schema and what have you, but for the sake of not running down a rabbit hole, let's just say that modifications to the server configuration are not authorized. I am only allowed to query, and it appears I need to be able to query NDS directly.
We are using the loginTime attribute.
After connections are banned (the LoginDisabled attribute is set to TRUE), the user cannot connect to the tree. We wait 35 days after the last login and then delete the user.
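Since Python was listed as acceptable, here is a rough sketch along those lines using the ldap3 library. The host, bind DN and search base are placeholders, and it assumes loginDisabled and loginTime are exposed over LDAP on your server; loginTime may come back as a Generalized Time string or as a datetime, depending on the schema information available to the client.

# Rough sketch (ldap3): find disabled users whose last login is > 30 days old.
from datetime import datetime, timedelta, timezone
from ldap3 import Server, Connection, SUBTREE

server = Server("ldaps://edir.example.com")                 # placeholder host
conn = Connection(server, "cn=admin,o=org", "password", auto_bind=True)

conn.search("o=org", "(loginDisabled=TRUE)", SUBTREE,
            attributes=["cn", "loginTime"])

cutoff = datetime.now(timezone.utc) - timedelta(days=30)    # 30-day policy window
for entry in conn.entries:
    last_login = entry.loginTime.value                      # datetime or raw string
    if isinstance(last_login, str):                         # e.g. '20240101120000Z'
        last_login = datetime.strptime(last_login[:14], "%Y%m%d%H%M%S").replace(tzinfo=timezone.utc)
    if last_login is not None and last_login < cutoff:
        print(entry.cn, "eligible for deletion")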
It is not possible to get this attribute over LDAP by default. However, you can try to add the attribute to LDAP. I cannot test this, as I have no test server at the moment. But the theory is that you go into iManager and find the LDAP Group object for the server you want to use for LDAP.
Then click the object, go to General, then the Attribute Map tab.
In there, add the attribute you want and map it.
