Running Laravel Jobs / Handling Events in Order Received

I have an endpoint: /webhook/events
I receive events from a third party at this endpoint in rapid succession. Each event contains the order information, for example:
Order: Update (4s to process)
Order: Delete (3s to process)
What happens is that when I receive these events, I check to see if the order exists. If the order exists in our DB and it's been soft-deleted, I restore it. In the 3rd-party application, an order can be deleted and restored so I have to reflect this functionality.
My problem is that I may get both of these events within milliseconds of each other, so the Order: Delete event (3s) finishes before the Order: Update event (4s). This soft-deletes the order and then restores it.
I need to make sure that the events run consecutively instead of racing, but I'm not sure how.
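One way to avoid the race is to serialize handling per order: push every webhook event for a given order onto a FIFO and process it one event at a time (in Laravel this is usually done with queued jobs, e.g. a dedicated queue processed by a single worker, or job middleware that prevents jobs for the same order from overlapping). Below is a language-agnostic sketch of the per-key idea in TypeScript; handleWebhookEvent and the event shape are invented for illustration, and in a multi-process web server you would use a real queue rather than in-memory state.

// Sketch: one promise chain per order ID, so events for the same order run
// strictly in arrival order while different orders still run in parallel.
const chains = new Map<string, Promise<void>>();

function enqueuePerOrder(orderId: string, work: () => Promise<void>): Promise<void> {
  const tail = chains.get(orderId) ?? Promise.resolve();
  // Append to the chain; run the next step even if the previous one failed.
  const next = tail.then(work, work);
  chains.set(orderId, next);
  return next;
}

// Hypothetical webhook handler: Update and Delete for the same order can no
// longer overtake each other.
async function handleWebhookEvent(event: { orderId: string; type: "update" | "delete" }): Promise<void> {
  await enqueuePerOrder(event.orderId, async () => {
    if (event.type === "delete") {
      // soft-delete the order here
    } else {
      // create/update (and restore if soft-deleted) here
    }
  });
}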

GA3 Event Push: Necessary fields in Request

I am trying to push an event towards GA3, mimicking an event sent by a browser towards GA. From this event I want to fill Custom Dimensions (visible in the User Explorer) and relate them to a GA ID which has visited the website earlier. Could this be done without influencing website data too much? I want to enrich someone's data from an external source.
So far I can't seem to find the minimum set of fields that has to be in the event call for this to work. I've got these so far:
v=1&
_v=j96d&
a=1620641575&
t=event&
_s=1&
sd=24-bit&
sr=2560x1440&
vp=510x1287&
je=0&_u=QACAAEAB~&
jid=&
gjid=&
_u=QACAAEAB~&
cid=GAID&
tid=UA-x&
_gid=GAID&
gtm=gtm&
z=355736517&
uip=1.2.3.4&
ea=x&
el=x&
ec=x&
ni=1&
cd1=GAID&
cd2=Companyx&
dl=https%3A%2F%2Fexample.nl%2F&
ul=nl-nl&
de=UTF-8&
dt=example&
cd3=CEO
So far the Custom Dimension fields don't get overwritten with new values. Does anyone know what is missing, or can share a list of the necessary fields with example values?
OK, a few things:
A CD value will be overwritten only if that CD's scope in GA is set to the user level. Make sure it is.
You need to know the user's client id. You can confirm that you have the right CID by using the User Explorer in the GA interface (unless you track it in a CD); it allows filtering by client id.
You want to make this hit non-interactional, otherwise you're inflating the session count, since GA will generate sessions for normal hits. A non-interactional hit has ni=1 among the params.
Also note that scope calculations don't happen immediately in real time; they happen later on. Give it two days, then check the results and re-run your experiment.
Use a throwaway/test/lower GA property to experiment. You don't want to affect production data while you're not sure exactly what you're doing.
A good use case for such an activity would be something like updating the lifetime value of existing users and enriching the data with it without waiting for all of them to come in. That's useful for targeting, attribution and more.
Thank you.
This is the case; all CDs are user-scoped.
This is the case; we are collecting them.
ni=1 is within the parameters of each event call.
There are so many parameters; which parameters are necessary?
We are using a test property for this.
We also have the Bot filtering setting checked.
It's hard to test when the User Explorer has a delay of 2 days and we are still not sure which parameters to use and which not. Who could help with the parameter part? My only goal is to update the CDs on the person. Who knows which parameters need to be part of the event call?
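For reference, per the GA3 Measurement Protocol documentation, the only required parameters for an event hit are v, tid, cid (or uid), t=event, ec and ea; custom dimensions go in cd<index>, and ni=1 keeps the hit non-interactional. Here is a hedged sketch in TypeScript (the token, client id and dimension values are placeholders); posting to the /debug/collect endpoint first returns a validation report, which helps with the 2-day User Explorer delay.

// Minimal GA3 Measurement Protocol event hit; placeholder values throughout.
const params = new URLSearchParams({
  v: "1",               // protocol version (required)
  tid: "UA-XXXXX-Y",    // tracking/property ID (required)
  cid: "GA_CLIENT_ID",  // client id of the user whose CDs you want to update (required)
  t: "event",           // hit type (required)
  ec: "enrichment",     // event category (required for event hits)
  ea: "profile-update", // event action (required for event hits)
  ni: "1",              // non-interaction, so no extra sessions are created
  cd1: "GA_CLIENT_ID",  // user-scoped custom dimensions
  cd2: "Companyx",
  cd3: "CEO",
});

async function sendHit(debug = true): Promise<void> {
  // /debug/collect validates the hit and returns JSON instead of storing it;
  // switch to /collect once the payload validates.
  const endpoint = debug
    ? "https://www.google-analytics.com/debug/collect"
    : "https://www.google-analytics.com/collect";
  const res = await fetch(endpoint, { method: "POST", body: params });
  if (debug) console.log(await res.json());
}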

Do I need to store last state of object in separate table in Event Sourcing

I'm still learning event sourcing and I don't understand something.
When I get a command to change an object, do I first recreate that object from the event store, then change it and save the event, or should I have a separate table that holds the last state?
What is the practice here?
The first rule of optimization: Don't.
For handling commands, all of the information that you need to have is stored in your event history; simply loading the history and recomputing any state you need will get the job done.
In the case where you need low latency in your command handler, AND recomputing the state you need from the event history is too slow to meet your service level targets, then you might look into saving a "snapshot", and using that to speed up the load of your data.
Current consensus is that snapshots should be saved separately from the event history (i.e., a snapshot is not another kind of event), as though the snapshot were just another "read model".
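As a minimal sketch of "load the history and recompute the state" in TypeScript (the event and state shapes below are invented, not from the question):

// Current state is a left fold of the event history; no separate
// "last state" table is needed to handle a command.
type AccountEvent =
  | { type: "Opened"; owner: string }
  | { type: "Deposited"; amount: number }
  | { type: "Closed" };

interface AccountState {
  open: boolean;
  balance: number;
}

const initial: AccountState = { open: false, balance: 0 };

function apply(state: AccountState, event: AccountEvent): AccountState {
  switch (event.type) {
    case "Opened":
      return { open: true, balance: 0 };
    case "Deposited":
      return { ...state, balance: state.balance + event.amount };
    case "Closed":
      return { ...state, open: false };
  }
}

// Command handling: rehydrate from the stream, decide, append new events.
function handleDeposit(history: AccountEvent[], amount: number): AccountEvent[] {
  const state = history.reduce(apply, initial);
  if (!state.open) throw new Error("account is not open");
  return [{ type: "Deposited", amount }];
}

// A snapshot, if ever needed, is just a cached (state, version) pair used as
// the starting point of the fold instead of `initial`.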

Order of wl_display_dispatch and wl_display_roundtrip call

I am trying to make sense of which of wl_display_dispatch and wl_display_roundtrip should be called first and which later. I have seen both orders, so I am wondering which one is correct.
1st order:
wl_display_get_registry(display); wl_registry_add_listener() // this call is just informational
wl_display_dispatch();
wl_display_roundtrip();
What I think: wl_display_dispatch() will read and dispatch events from the display fd, whatever has been sent by the server, but in the meantime the server might still be processing requests, and for a brief time the fd might be empty.
wl_display_dispatch() returns, assuming all events have been dispatched. Then wl_display_roundtrip() is called and blocks until the server has processed all requests and put the resulting events in the event queue. So after this, the event queue still has pending events, but there is no further call to wl_display_dispatch(). How will those pending events be dispatched? Or does wl_display_dispatch() wait for the server to process all requests and then dispatch all the events?
2nd order:
wl_display_get_registry(display); wl_registry_add_listener() // this call is just informational
wl_display_roundtrip();
wl_display_dispatch();
In this case, wl_display_roundtrip() waits for the server to process all requests and put the resulting events in the event queue, so once it returns we can assume all events sent by the server are available in the queue. Then wl_display_dispatch() is called, which will dispatch all pending events.
The 2nd order looks correct and logical to me, as there is no chance of leftover pending events in the queue, but I have seen the 1st order in many places, including in the Weston client example code, so I am confused about the correct order of calling.
It would be great if someone could clarify here.
Thanks in advance
2nd order is correct.
The client can't do much without getting a proxy (a handle for a global object). What I mean is that the client sends requests by binding to the global objects advertised by the server, so the client has to block until all the global objects are bound in the registry listener callback.
For example, for the client to create a surface, you need to bind the wl_compositor interface, then a shell interface to give the surface a role, then shm (for shared memory), and so on. wl_display_dispatch() cannot guarantee that all the events have been processed; if you're lucky it may dispatch all events too, but that is not guaranteed every time. So you should use wl_display_roundtrip() for the registry, at least.

Event sourcing - error handling when events are not created

As I understand it, in event sourcing, events are recorded. However, that would also mean a state change happens first and thereafter we record the event. For example, assuming:
1. A client sends a command to a server to "Create user".
2. The server validates the command and creates the user, i.e. stores the new user in a database.
3. The server then logs/stores a "Created User" event, i.e. event sourcing.
4. The "Created User" event is propagated to subscribers.
In the scenario above, how do we handle cases where step (2) succeeded but step (3) failed due to, say, a network failure, the database being offline, etc.? The whole system would now be in an indeterminate state: a new user was created but the event was never logged. How do we mitigate these types of failures? Or are the steps I've listed above not the way to do event sourcing?
Thanks!
This is not exactly what happens in event sourcing, nor even in plain CQRS.
In event sourcing, after the command is validated, the domain events are generated by the source (the Aggregate in DDD) and then appended to the event store as the first step. After that, the subscribers (read models, projections, sagas, external systems) receive and process the new domain events.
In CQRS, after the domain events are generated, they are applied to the Aggregate, and then the Aggregate's state and the new events are persisted in the same local transaction as the first step. Only after that do the subscribers receive the events.
So you see, your situation cannot happen: steps 2 and 3 are persisted atomically; they succeed or fail together.
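A minimal sketch of that point in TypeScript (the EventStore interface and event shape are invented): the command handler's only write is the append to the event store, and the relational "users" row is a read model built by a subscriber afterwards, so there is no step (2) that can succeed while step (3) fails.

interface UserCreated {
  type: "UserCreated";
  userId: string;
  email: string;
}

interface EventStore {
  // The append is atomic and durable: either the event is stored or the
  // command as a whole fails.
  read(streamId: string): Promise<UserCreated[]>;
  append(streamId: string, events: UserCreated[]): Promise<void>;
}

async function handleCreateUser(store: EventStore, userId: string, email: string): Promise<void> {
  const history = await store.read(`user-${userId}`);
  if (history.length > 0) throw new Error("user already exists");

  // Single step: no separate "insert into users" that could get out of sync.
  await store.append(`user-${userId}`, [{ type: "UserCreated", userId, email }]);
}

// A projection/subscriber later consumes UserCreated and writes the row into
// the "users" read table; if that write fails it is simply retried, because
// the event store already holds the authoritative record.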

When do I call mixpanel.people.identify

How do I tell mixpanel the userID of my logged on user?
Do I need to call mixpanel.people.identify() every time my user logs in, or only the first time when I'm creating them on Mixpanel?
If only the first time, how does mixpanel know who to associate events to?
Also, once I have identified the person, will all events be traceable to that person, or do I need to call people.set() explicitly to track a generic event separately from a user-specific event?
You should call mixpanel.people.identify() every time a user logs in. You can even call it every time a page loads in a logged in state if you want.
identify sets some data in a cookie about what distinct_id the library should use when sending people data.
If you have called mixpanel.identify with the same distinct_id as mixpanel.people.identify, events that you send (with mixpanel.track) will show up under the user's profile on the Customers tab. In order for the user to show up at all, though, you will need to call mixpanel.people.set (or .add) at least once.
EDIT: mixpanel.people.identify is no longer necessary; you can just call mixpanel.identify and it will set both.
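Putting the above together, a minimal sketch using the mixpanel-browser client (the project token and profile properties are placeholders):

import mixpanel from "mixpanel-browser";

mixpanel.init("YOUR_PROJECT_TOKEN");

function onLogin(userId: string, plan: string): void {
  // Safe to call on every login / page load in a logged-in state; per the
  // edit above, identify() now covers people.identify() as well.
  mixpanel.identify(userId);

  // The profile only shows up once at least one people.set()/add() has run.
  mixpanel.people.set({ plan, last_login: new Date().toISOString() });

  // Events tracked after identify() are attributed to this profile.
  mixpanel.track("Logged In");
}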
