I'm confused by the way that Mixpanel alias() is supposed to work, despite the fact that Mixpanel have multiple pages attempting to explain it.
According to this page, I should call alias() only once per user, because it will create a one-time mapping from their user ID to the device's generated ID. But shouldn't that mapping be the other way around? Let's say Bob starts my app on his phone and logs in, at which point I call alias() to map all his actions so far to his account. He then goes through the same process on his tablet - I would expect that I can then call alias() on that machine to do the same thing. But the page I mentioned specifically says not to do that, because it will map his user ID to that device's ID now.
I can call identify() on each of those devices, but that does not link his previous events to his user ID.
I feel like I'm misunderstanding how this whole thing works, but I've now spent a few hours pondering this so I'm hoping it's confused someone else in the past too...
I always understood alias() as mapping the identifiers both ways. I've had a similar case to yours, and I'm almost sure that it does not matter how many times you alias, or in which direction you alias the identifiers.
This is not authoritative, though; it's based on past usage and a possibly flawed understanding.
As they explain in their help documentation:
https://mixpanel.com/help/questions/articles/how-should-i-handle-my-user-identity-with-the-mixpanel-javascript-library
Ideal implementation
The ideal integration that will allow you to track users from anonymous browsing all the way through signup and subsequent logins:
When a new user signs up, call (once)
mixpanel.alias("YOUR_USER_ID")
When a user logs in, call
mixpanel.identify("YOUR_USER_ID")
Applying this to your question: you need to call identify() when the user logs in on the phone, and again when he logs in on the tablet.
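To make that concrete, here's a minimal sketch of the flow with the Mixpanel JavaScript library (the user ID is just a placeholder):
// At signup, on the first device only: link the anonymous distinct_id
// to your user ID. Call this exactly once per user.
mixpanel.alias("bob@example.com");

// At every login, on any device (phone, tablet, ...): attribute this
// device's events to the same user ID.
mixpanel.identify("bob@example.com");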
I am trying to push an event to GA3, mimicking an event sent by a browser. With this event I want to fill custom dimensions (visible in the User Explorer) and relate them to a GA client ID which has visited the website earlier. Could this be done without influencing website data too much? I want to enrich someone's data from an external source.
So far I can't seem to find the minimum fields which have to be in the event call for this to work. I've got these so far:
v=1&
_v=j96d&
a=1620641575&
t=event&
_s=1&
sd=24-bit&
sr=2560x1440&
vp=510x1287&
je=0&
_u=QACAAEAB~&
jid=&
gjid=&
cid=GAID&
tid=UA-x&
_gid=GAID&
gtm=gtm&
z=355736517&
uip=1.2.3.4&
ea=x&
el=x&
ec=x&
ni=1&
cd1=GAID&
cd2=Companyx&
dl=https%3A%2F%2Fexample.nl%2F&
ul=nl-nl&
de=UTF-8&
dt=example&
cd3=CEO
So far the custom dimension fields don't get overwritten with new values. Who knows which field is missing, or can share a list of necessary fields and example values?
OK, a few things:
A CD value will be overwritten only if that CD's scope in GA is set to the user level. Make sure it is.
You need to know the client id of the user. Unless you track it in a CD, you can confirm you have the right CID by using the User Explorer in the GA interface; it allows filtering by client id.
You want to make this hit non-interactional, otherwise you're inflating the session count, since GA generates sessions for normal hits. A non-interaction hit has ni=1 among its params.
Note that scope calculations don't happen immediately in real time; they happen later on. Give it two days, then check the results and re-run your experiment.
Use a throwaway/test/lower GA property to experiment. You don't want to affect production data while you're still working out exactly what you're doing.
A good use case for such an activity would be something like updating the lifetime value of existing users, enriching the data without waiting for all of them to come back in. That's useful for targeting, attribution and more.
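For reference, a hit with far fewer params should be enough to test with. A hedged sketch (the property ID and client id are placeholders; per the Measurement Protocol docs, v, tid, cid, t, ec and ea are the required fields for an event hit, the rest is what you're adding on top):
// Minimal GA3 Measurement Protocol event hit, sent server-side.
const params = new URLSearchParams({
  v: "1",              // protocol version
  tid: "UA-XXXXXX-Y",  // your (test!) property
  cid: "CLIENT_ID",    // client id of the user you want to enrich
  t: "event",          // hit type
  ec: "enrichment",    // event category
  ea: "update",        // event action
  ni: "1",             // non-interaction: don't inflate sessions
  cd1: "CLIENT_ID",    // user-scoped custom dimension 1
  cd2: "Companyx",     // user-scoped custom dimension 2
});

fetch("https://www.google-analytics.com/collect", {
  method: "POST",
  body: params.toString(),
});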
Thank you.
This is the case; all CDs are user-scoped.
This is the case; we are collecting them.
ni=1 is within the parameters of each event call.
There are so many parameters; which ones are necessary?
We are using a test property for this.
We also have Bot Filtering checked in the view settings.
It's hard to test when the User Explorer has a delay of two days and we are still not sure which parameters to use and which not. Who could help on the parameter part? My only goal is to update the CDs on the person. Who knows which parameters need to be part of the event call?
So, I'm working on a CQRS/ES project in which we are having some doubts about how to handle trivial problems that would be easy to handle in other architectures.
My scenario is the following:
I have a customer CRUD REST API, and each customer has a unique document (number). So when I'm registering a new customer, I have to verify that there is no other customer with that document, to avoid duplicates. But in a CQRS/ES architecture, where we have eventual consistency, I found that this kind of validation can be very hard to address.
It is important to notice that my problem is not across microservices, but between the command application and the query application of the same microservice.
Also we are using eventstore.
My current solution:
So what I do today is, in my command application, before saving the CustomerCreated event, I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right? Because my query can be desynchronized, so I cannot trust it 100%. That's when my second validation kicks in, when my query application is processing the events and saving them to my PostgreSQL, I check again if there is a customer with that document and if there is, I reject that event and emit a compensating event to undo/cancel/inactivate the customer with the duplicated document, therefore finishing that customer stream on eventstore.
Although this works, there are two things that bother me here. The first is my command application relying on the query application: if my query application is down, my command side is affected (today I just return false in my validation if the query side is down, but still...). The second is: should a query/read model really be able to emit events? And if so, what is the correct way of doing it? Should the command side have some kind of API for that? Or should the query side emit the event directly to eventstore using some common shared library? And if I have more than one view/read model, which one should handle this?
Really hope someone can shed some light on these questions and help me with these matters.
For reference, you may want to be reviewing what Greg Young has written about Set Validation.
I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right?
That's exactly right - your read model is a stale copy, and may not have all of the information collected by the write model.
That's when my second validation kicks in, when my query application is processing the events and saving them to my PostgreSQL, I check again if there is a customer with that document and if there is, I reject that event and emit a compensating event to undo/cancel/inactivate the customer with the duplicated document, therefore finishing that customer stream on eventstore.
This spelling doesn't quite match the usual designs. The more common implementation is that, if we detect a problem when reading data, we send a command message to the write model, telling it to straighten things out.
This is commonly referred to as a process manager, but you can think of it as the automation of a human supervisor of the system. Conceptually, a process manager is an event sourced collection of messages to be sent to the command model.
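A hedged sketch of that idea in JavaScript (all the names here - CustomerRegistered, UnregisterCustomer, readDb, commandBus - are illustrative, not from any particular framework):
// Process manager: subscribes to events and reacts by sending commands
// to the write model; it never writes events from the read side.
async function onCustomerRegistered(event, readDb, commandBus) {
  const existing = await readDb.findCustomerByDocument(event.document);
  if (existing && existing.customerId !== event.customerId) {
    // Duplicate detected after the fact: ask the write model to
    // straighten things out via a compensating command.
    await commandBus.send({
      type: "UnregisterCustomer",
      customerId: event.customerId,
      reason: "duplicate document",
    });
  }
}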
You might also want to consider whether you are modeling your domain correctly. If documents are supposed to be unique, then maybe the command model should be using the document number as a key in the book of record, rather than using the customer. Or perhaps the document id should be a function of the customer data, rather than being an arbitrary input.
As far as I know, eventstore doesn't have transactions across different streams.
Right - one of the things you really need to be thinking about in general is where your stream boundaries lie. If set validation has significant business value, then you really need to be thinking about getting the entire set into a single stream (or by finding a way to constrain uniqueness without using a set).
How should I send a command message to the write model? via API? via a message broker like Kafka?
That's plumbing; it doesn't really matter how you do it, so long as you are sure that the command runs within its own transaction/unit of work.
So what I do today is, in my command application, before saving the CustomerCreated event, I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right? Because my query can be desynchronized, so I cannot trust it 100%.
No, you cannot safely rely on the query side, which is eventually consistent, to prevent the system from stepping into an invalid state.
You have two options:
You permit the system to enter a temporary, pending state and then, eventually, bring it into a valid permanent state. For this you could allow the command to pass, yield the CustomerRegistered event, and, using a Saga/Process manager, verify against a uniquely-indexed-by-document collection and issue a compensating command (not an event!), i.e. UnregisterCustomer.
Instead of sending the command directly, you create & start a Saga/Process that preallocates the document in a uniquely-indexed-by-document collection and, if that succeeds, sends the RegisterCustomer command. You can model the Saga as an entity.
So, in both solutions you use a Saga/Process manager. In order for the system to be resilient, you should make sure that the RegisterCustomer command is idempotent (so you can resend it if the Saga fails or is restarted).
You've butted up against a fairly common problem. I think the other answer by VoiceOfUnreason is worth reading. I just wanted to make you aware of a few more options.
A simple approach I have used in the past is to create a lookup table. Your command tries to register the key in a unique-constraint table; if it can reserve the key, the command can go ahead.
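A hedged sketch of that with node-postgres, assuming a table created as CREATE TABLE customer_documents (document text PRIMARY KEY):
const { Client } = require("pg");

// Returns true if the document was reserved, false if it is already taken.
async function tryReserveDocument(client, document) {
  try {
    await client.query(
      "INSERT INTO customer_documents (document) VALUES ($1)",
      [document]
    );
    return true;  // key reserved: the RegisterCustomer command may proceed
  } catch (err) {
    if (err.code === "23505") return false;  // unique_violation: duplicate
    throw err;
  }
}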
Depending on the nature of the data and the domain, you could let this 'problem' occur and raise additional events to mark it. If it is something that's important to the business or the way the application works, then you can deal with it either manually or via compensating commands at the time. If the latter, it would make sense to use a process manager.
In some (rare) cases where speed/capacity is less of an issue, you could consider old-fashioned locking and transactions. Admittedly these are much better suited to CRUD-style implementations, but they can be used in CQRS/ES.
I have more detail on this in my blog post: How to Handle Set Based Consistency Validation in CQRS
I hope you find it helpful.
I find that some websites have a sort of authentication even though no user is logged in. Take Plunker for example: even a non-logged-in user can freeze a snippet so that other users cannot modify it, whereas the user himself can always modify the snippet, even if he opens the link in another browser tab.
My current solution is adding a type field (i.e., anonymous or normal) to the user model. Then, each time there is no normal user logged in, I systematically generate a unique random ID, then register and log in as an anonymous user. It works, but the shortcoming is that there are lots of anonymous users in my database.
Does anyone have a better solution? Is there any "standard" way to implement this kind of hidden authentication?
I think the method you are looking for is called a session id. When you save as an anonymous user, the web app creates a session with a session id which is used to identify the user by link. For example, on plnkr it'll be something like https://plnkr.co/edit/session_id?p=catalogue where session_id is some sort of hash.
To freeze the snippet, the session id is written into cookies with a flag saying, for example, that the state is frozen. If you freeze it in Chrome and open it in a Chrome private window, or in Firefox on the same computer, you won't be able to unfreeze it; it'll behave the same way as for other users, who have no cookies. In fact, using the session hash for cookies, rather than any user identification, is better for security reasons.
Now, this approach in a sense isn't any better than creating anonymous users - you still have to save session records in the database to be able to open the session context by link. In fact, it might turn out to be simpler in your case to do exactly what you did, if a user is assumed to be present in lots of use cases and places in the code.
In many cases, however, separating the session from the user makes a lot of sense, as it simplifies keeping session state after login or registration. Some web stores empty your basket after you register, causing quite a bit of frustration, especially if you had put several small items into it which you now have to find again and put back. Those stores either don't have sessions or don't use them correctly on registration or login.
Otherwise, as I wrote, it's pretty much the same, and you have to deal with many anonymous sessions polluting the database unless you have some sensible retention policy for your use case. For example, a website similar to plnkr.co, which is used to share code snippets and post them on sites such as stackoverflow, had better keep a session as long as users are accessing it at least, say, once a year. So sessions should have a last-access date, and the policy would be to delete those older than one year.
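To sketch how that might look (Express; the in-memory store and route are illustrative, you'd persist sessions in your database):
const express = require("express");
const crypto = require("crypto");

const app = express();
const sessions = new Map();  // illustrative; use a real database in practice

app.post("/edit", (req, res) => {
  const sessionId = crypto.randomBytes(8).toString("hex");
  sessions.set(sessionId, { frozen: false, lastAccess: Date.now() });
  // The cookie marks this browser as the owner; other browsers only
  // ever get the shareable link and so cannot unfreeze the snippet.
  res.cookie("owner_" + sessionId, "1", { httpOnly: true });
  res.redirect("/edit/" + sessionId);  // shareable link, like plnkr.co/edit/<hash>
});

app.listen(3000);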
Hope it helps.
I have done something similar using Local Storage, which allows you to store data in the browser. A user can then open tabs, close the browser completely and reopen it, etc., and the data is still there. It then appears to be saved for them, but actually it's just stored in their browser.
This wouldn't allow others to see what they have done though, so not sure if this is quite what you're after.
I wrapped the calls in functions in case I chose to change them out later, something like this:
var storageHelper = {  // (object name is just an example)
    // Persist a value in the browser's localStorage
    StoreLocalVariable: function (key, value) {
        localStorage.setItem(key, value);
    },
    // Read a value back; returns null if the key has never been set
    GetLocalVariable: function (key) {
        return localStorage.getItem(key);
    }
};
Some info, including compatibility:
https://developer.mozilla.org/en-US/docs/Web/API/Web_Storage_API/Using_the_Web_Storage_API
I want to look at the effect of having performed a specific action sequence at any (tracked) time in the past on user retention and engagement.
The action sequence is that of performing an optional New User Flow.
This is signalled to Google Analytics via sending it appropriate events. That works fine. The events show up in reports as expected.
My problem is what happens to results when I used these events to create segments. I have tried two different ways of creating a segment based on this in Advanced Segmentations, via Conditions (defining the segment via the end event, filtered over users not sessions), and via Sequences (defining start and end events, again filtered over users not sessions).
What I get when I look at various retention/loyalty reports, using either of these segments, is very clearly a result which is doing this segmentation within session, not across user sessions. So for NUF completers, I am seeing all my loyalty/recency in Session 1, in which people are most likely to do the NUF, if they ever do it at all. This is not what I want. (Mind you, it is something that could be really useful in another context, with another event! But not for the new user flow.)
What are my options for getting what I want? I see two possible ways forward:
Using custom dimensions, assigning a custom dimension value in the code when the New User Flow is completed (see the sketch after this list). However, I do not know if this will solve the cross-session persistence problem.
Injecting a UserID, which we do not currently do, and (somehow!) using the reports available when you inject a UserID to do this.
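For option 1, this is roughly what I have in mind with analytics.js (the dimension index is just an example; dimension1 would have to be registered as user-scoped in the GA admin):
// Set a user-scoped custom dimension once the New User Flow completes,
// so it rides along on this and subsequent hits from this tracker.
ga('set', 'dimension1', 'nuf_completed');
ga('send', 'event', 'NewUserFlow', 'completed', { nonInteraction: true });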
Are either of these paths plausible? Is there a better way forward? Is it silly to even try to do this in Google Analytics? I'm far more familiar with app tracking solutions (e.g. Flurry, Mixpanel, DeltaDNA), which do this as a matter of course, than with Google Analytics, and the fact that this is at the very least awkward in Google Analytics is coming as a bit of a surprise.
thanks,
Heather
In a certain situation, I want a request to execute as a different user than the one actually logged in. So, when User A requests a particular page, from code I want to switch it to execute under the user account of User B - only for this single request (or even for a single block of code...).
I've used this:
// Issues an auth cookie for UserB (false = session cookie), so
// subsequent requests run as UserB too
FormsAuthentication.SetAuthCookie("UserB", false);
This works, but it's persistent. When I request a new page, I'm now logged in as User B, which is not what I want.
Is this possible? I've RTFM'd up and down for this.
Edit: I may have found an answer, which I posted below. Looking for confirmation or refutation of this solution.
I think I might have found it. I'm not 100% sure, but this works. I just don't know if it's completely correct or if there's something I'm not considering.
// Replaces the principal for the current request only; the auth cookie is
// untouched, so the next request still runs as the originally logged-in user
HttpContext.Current.User = PrincipalInfo.CreatePrincipal("UserB");
Again, this seems to do what I originally wanted. If you have experience that would prove me right or wrong, please comment.