I am trying to push an event to GA3, mimicking an event a browser would send to GA. From this event I want to fill custom dimensions (visible in the User Explorer) and relate them to a GA client ID which has visited the website earlier. Could this be done without influencing website data too much? I want to enrich someone's data from an external source.
So far I can't seem to find the minimum set of fields that has to be in the event call for this to work. I've got these so far:
v=1&
_v=j96d&
a=1620641575&
t=event&
_s=1&
sd=24-bit&
sr=2560x1440&
vp=510x1287&
je=0&
_u=QACAAEAB~&
jid=&
gjid=&
cid=GAID&
tid=UA-x&
_gid=GAID&
gtm=gtm&
z=355736517&
uip=1.2.3.4&
ea=x&
el=x&
ec=x&
ni=1&
cd1=GAID&
cd2=Companyx&
dl=https%3A%2F%2Fexample.nl%2F&
ul=nl-nl&
de=UTF-8&
dt=example&
cd3=CEO
So far the custom dimension fields don't get overwritten with new values. Who knows which field is missing, or can share a list of necessary fields and example values?
Ok, a few things:
A CD value will be overwritten only if that CD's scope in GA is set to the user level. Make sure it is.
You need to know the client ID of the user. You can confirm that you have the right CID by using the User Explorer in the GA interface (unless you already track it in a CD); it allows filtering by client ID.
You want to make this hit non-interactional; otherwise you're inflating the session count, since GA generates sessions for normal hits. A non-interactional hit has ni=1 among its params.
Wait. Scope calculations don't happen immediately in real time; they happen later on. Give it two days, then check the results and re-run your experiment.
Use a throwaway/test/lower GA property to experiment. You don't want to affect production data while you're not yet sure exactly what you're doing.
There. A good use case for such an activity would be something like updating the lifetime value of existing users and enriching the data with it without waiting for all of them to come back in. That's useful for targeting, attribution and more.
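To make the parameter question concrete, here is a minimal sketch of such a hit in Python. Per the Measurement Protocol docs, v, tid, cid and t are required for every hit, and ec/ea are required for event hits; the cd1/cd2 index numbers are assumptions that must match the user-scoped custom dimension slots of your own property:

import requests

# A minimal GA3 Measurement Protocol event hit (a sketch, not a full browser payload).
# UA-XXXXX-Y, the client ID and the cd* index numbers are placeholders for your test property.
payload = {
    "v": "1",                # protocol version (required)
    "tid": "UA-XXXXX-Y",     # tracking ID of the (test!) property (required)
    "cid": "555.1234567890", # client ID of the existing user (required)
    "t": "event",            # hit type (required)
    "ec": "enrichment",      # event category (required for event hits)
    "ea": "update",          # event action (required for event hits)
    "ni": "1",               # non-interaction hit: don't create/inflate sessions
    "cd1": "555.1234567890", # user-scoped CD slot 1 (assumed index)
    "cd2": "Companyx",       # user-scoped CD slot 2 (assumed index)
}
resp = requests.post("https://www.google-analytics.com/collect", data=payload)
print(resp.status_code)  # GA returns 200 even for invalid hits

Note that GA answers almost anything with a 200, so POSTing the same payload to the /debug/collect validation endpoint first is the only way to see parse errors.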
Thank you.
This is the case; all CDs are user-scoped.
This is the case; we are collecting them.
ni=1 is included in the parameters of each event call.
There are so many parameters; which ones are necessary?
We are using a test property for this.
We also have the Bot Filtering option checked.
It's hard to test when the User Explorer has a delay of two days and we are still not sure which parameters to use and which not. Who could help on the parameter part? My only goal is to update the CDs on the person. Who knows which parameters need to be part of the event call?
I'm developing a web app that needs to retrieve the last 10 videos of a user (channel).
First approach
Was to use the search endpoint with the param 'forMine', ordering by date, but then I figured that this param could maybe retrieve videos uploaded by the user to a different channel or whatever...
First result with channel ID and date - 1st approach
Second approach
Was to use the search endpoint with the param 'channelId', ordering by date, but then I realized that descriptions were incomplete and, most importantly, some videos were missing compared with the first approach, even though the missing videos belonged to the same channel (as shown in the linked pics).
First result with channel ID and date - 2nd approach
So then I googled for a solution and found another way.
Third approach
Was to use the playlistItems endpoint, as I found on Google, and it seemed OK (I supposed) because it returned the same videos as the first approach and consumed less quota. But this method left me with doubts, as I didn't know whether the videos would be the latest or whether they would be sorted by position in the playlist and couldn't be trusted to be the most recent.
That said, what would be the correct way to get the N most recent videos from a channel, please?
Regardless of the quota consumption (the less quota the better, of course, but an accurate result is essential).
I'm so confused by the API responses...
Thank you so much!
-- EDITED: NEW APPROACH AND FURTHER INVESTIGATIONS --
Fourth approach
Was to use the activities endpoint, as stated by #stvar in his answer. I found that this way, as with the second approach, some videos were missing compared with the first and third approaches. It was also required to retrieve everything without the 'maxResults' param, because there were activities not related to video uploads, which made it mandatory to paginate and self-filter by type 'upload' after retrieving the response in order to get N videos (or to be confident that N uploaded videos appear in the first 50 retrieved items).
Self Investigations
Further investigation and testing brought me the answer to the issue of the 'missing videos' in some approaches.
The status of the missing videos was 'unlisted': they were videos uploaded to the channel, property of the channel, uploaded by the channel's user... but not retrieved by some methods, which seem to retrieve only 'public' videos, not 'unlisted' (hidden) or 'private' ones.
NOTE: I did my tests with the Google API PHP client library. This behaviour does not seem to occur on 'Try this API', which returns only 'public' items, so be careful about trusting 'Try this API' results, as it seems to apply some hidden filters or something...
I also tested the channel's uploads playlist to verify that its order cannot be changed and that it has LIFO sorting.
CONCLUSIONS
At this point, my own conclusion is that there is no single proper way to solve this, but rather several ways to do it, depending on your requirements regarding video status and the amount of free quota.
The search endpoint seems to work all right. If you have a good amount of unused quota (100 units per call), it is the most direct and easiest way, as you can sort and filter as needed with a bunch of params, taking care to use the 'forMine' param if you need every uploaded video, or 'channelId' if you need only 'listed' and 'public' ones.
The playlistItems endpoint is a proper way if you are in a quota crisis (1 unit per call), as the result is sorted by most recent date. Take care to paginate and post-filter if only 'public' videos are needed, until you retrieve the desired number of video IDs; otherwise it is plain sailing (see the sketch after this list).
Note that the date used for ordering is the upload date, not the publish date
(thanks to #stvar for bringing this to my attention).
The activities endpoint, also one for a quota crisis (1 unit per call), could be more accurate than the others if you only want public videos (it is ordered by the most recent 'first publish date', so not 100% accurate either), but for me it is the one that takes the most work, as it retrieves activities other than 'video upload', so you cannot skip pagination and post-filtering to collect the desired number of video IDs. Besides, as said before, this way you only have access to public videos (which is fine if that meets your needs).
Anyway, if you need more than 50 IDs, you need to paginate whatever approach you use.
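For what it's worth, a sketch of the playlistItems route in Python (assuming the google-api-python-client library; API_KEY and CHANNEL_ID are placeholders you must supply yourself):

from googleapiclient.discovery import build

# A sketch of the playlistItems approach (approach no. 3).
youtube = build("youtube", "v3", developerKey="API_KEY")

# The ID of a channel's uploads playlist comes from channels.list (1 quota unit).
channel = youtube.channels().list(part="contentDetails", id="CHANNEL_ID").execute()
uploads_id = channel["items"][0]["contentDetails"]["relatedPlaylists"]["uploads"]

# Walk the uploads playlist (1 quota unit per page of 50), optionally
# post-filtering to 'public' items, until n video IDs are collected.
def latest_video_ids(n, public_only=False):
    ids, token = [], None
    while len(ids) < n:
        page = youtube.playlistItems().list(
            part="snippet,status",
            playlistId=uploads_id,
            maxResults=50,
            pageToken=token,
        ).execute()
        for item in page.get("items", []):
            if public_only and item["status"]["privacyStatus"] != "public":
                continue
            ids.append(item["snippet"]["resourceId"]["videoId"])
        token = page.get("nextPageToken")
        if token is None:
            break  # fewer than n items in the playlist
    return ids[:n]

print(latest_video_ids(10))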
Hope this helps someone else, and thanks so much to the contributors.
PS: To the people in charge of the YouTube API: perhaps a filter by status, among some others, would be interesting. Thanks!!!
You may employ the Activities.list API endpoint, queried with:
mine=true,
part=snippet,contentDetails,
fields=items(snippet(type),contentDetails(upload)), and
maxResults=50.
To obtain your desired N uploads, you have to implement pagination: that is, you have to successively call the endpoint until you reach N result set items that have snippet.type equal to upload.
Note that you may well use channelId=CHANNEL_ID instead of mine=true if you're interested in the most recent uploads of a channel identified by its ID CHANNEL_ID rather than your own channel.
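A minimal sketch of that pagination loop in Python (assuming the google-api-python-client library; API_KEY and CHANNEL_ID are placeholders, and channelId= is used instead of mine= so that an API key suffices without OAuth):

from googleapiclient.discovery import build

# nextPageToken must be kept in the fields filter, otherwise the paging
# information is stripped from the response.
youtube = build("youtube", "v3", developerKey="API_KEY")

def latest_upload_ids(n):
    ids, token = [], None
    while len(ids) < n:
        page = youtube.activities().list(
            part="snippet,contentDetails",
            channelId="CHANNEL_ID",
            fields="nextPageToken,items(snippet(type),contentDetails(upload))",
            maxResults=50,
            pageToken=token,
        ).execute()
        # Keep only upload activities; other types carry no contentDetails.upload.
        for item in page.get("items", []):
            if item["snippet"]["type"] == "upload":
                ids.append(item["contentDetails"]["upload"]["videoId"])
        token = page.get("nextPageToken")
        if token is None:
            break  # fewer than n uploads exist on the channel
    return ids[:n]

print(latest_upload_ids(10))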
According to the docs, you'll get from this endpoint a result set made of Activities resource items that will contain the following info:
contentDetails.upload (object)
The upload object contains information about the uploaded video. This property is only present if the snippet.type is upload.
contentDetails.upload.videoId (string)
The ID that YouTube uses to uniquely identify the uploaded video.
The official docs state that each call to the Activities.list endpoint has a quota cost of one unit.
Furthermore, upon obtaining a set of video IDs, you may invoke the Videos.list endpoint with a properly assigned id parameter, to obtain from that endpoint all the details you need for each and every video of your interest.
Note that if you have a set of video IDs of cardinality K, then, since the id parameter of the Videos.list endpoint can be specified as a comma-separated list of video IDs, you may reduce the number of calls to the Videos.list endpoint from K to floor(K / 50) + (K % 50 ? 1 : 0) -- that is, ceil(K / 50) -- by batching the IDs in groups of at most 50.
According to the official docs, each call to Videos.list endpoint has also a quota cost of one unit.
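For instance, a sketch of that batching (continuing with the hypothetical youtube client from the sketch above; ids is the list of video IDs collected from Activities.list):

# Fetch details for all IDs using ceil(K / 50) calls to Videos.list.
def video_details(youtube, ids):
    details = []
    for i in range(0, len(ids), 50):
        page = youtube.videos().list(
            part="snippet,contentDetails,statistics",
            id=",".join(ids[i:i + 50]),  # up to 50 comma-separated IDs per call
        ).execute()
        details.extend(page.get("items", []))
    return details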
Clarifications upon the OP's request:
Question no. 1: The Activities.list endpoint produces only the activities specified by the Activities resource. The type property enumerates them all:
snippet.type (string)
The type of activity that the resource describes.
Valid values for this property are: channelItem, comment (not currently returned), favorite, like, playlistItem, promotedItem, recommendation, social, subscription, upload, bulletin (deprecated).
Indeed, your remark is correct. For example, when getting the most recent 10 uploads, it is possible that you'll have to scan a number P of result set pages, with P >= 2, until you have collected the desired 10 upload items. (Actual tests have confirmed this to me as factual.)
Question no. 2: The Activities.list endpoint produces items that are sorted by publishedAt; just replace the above fields with:
fields=items(snippet(type,publishedAt),contentDetails(upload))
and see that for yourself.
I could make here the following argument justifying the necessity that the items resulting from an invocation of the Activities.list endpoint be ordered chronologically by publishedAt (the newest first). One may note that, indeed, the official docs quoted above do not specify explicitly the ordering condition I just mentioned; but bear with me for a while:
My argument is of a pragmatic kind: if the result set of Activities.list is not ordered as mentioned, then this endpoint becomes useless. This is so since, in that case, to obtain the most recent upload activity one would have to fetch all the upload activities locally, and then scan that result set for the most recent one. Being compelled to fetch all upload activities only to obtain the newest one is pragmatically nonsense. Therefore, by way of contradiction, the result set has to be ordered chronologically by publishedAt, with the newest being the first.
Question no. 3: Indeed Search.list is not precise -- it has a fuzzy behavior. I can confirm this based on my own experience; but, unfortunately, I cannot point you to official docs (from Google or YouTube) that acknowledge and explain this behavior. As unfortunate as it is, for its users Search.list is completely opaque.
On the other hand, Activities.list is precise -- it has to be; if it weren't precise, that would be a serious bug in the implementation (in my educated opinion).
We are working on an application in the compliance/monitoring space where we monitor the activity of an individual. Because of this, we want to pull EVERYTHING in a user's Office 365 mailbox: if it has text the user wrote or received, we want it, even if it was deleted, purged, etc.
We are using the Graph API and have an existing implementation that uses the standard "messages" GET command:
GET https://graph.microsoft.com/v1.0/me/messages
We are making use of the GraphApiClient (Microsoft.Graph v1.9.0), so the code actually looks like this:
IUserMessagesCollectionPage pageOfMessages = _graphClient.Users[userId].Messages.Request(options).Top(batchSize).Expand("attachments").GetAsync().Result;
However, at the very least this does not return any items from any of the RecoverableItems folders. After looking into it, I am now suspicious that there might be other folders that are not returned by this command either. There is quite the list of Well-known folder names and I'm not sure what others might not be included in a generic Messages request.
Based on this post, I know you can request the messages in the missing folders by WellKnownFolderName one at a time like this:
GET https://graph.microsoft.com/v1.0/me/MailFolders/RecoverableItemsDeletions/messages
It even works with the GraphApiClient:
IMailFolderMessagesCollectionPage pageOfMessages = _graphClient.Users[userId].MailFolders["RecoverableItemsDeletions"].Messages.Request(options).Top(batchSize).Expand("attachments").GetAsync().Result;
The problems with this are:
I don't know how to build a comprehensive list of every folder that has messages for the user
Some of the folders (like RecoverableItemsDeletions and ArchiveRecoverableItemsDeletions, for example) can contain duplicates, so I would need to use a dictionary to get rid of the duplicates
It would be a lot more expensive to first build a list of relevant folders and then request their contents and their children's contents one request at a time.
At scale, a folder-by-folder implementation could be subject to throttling (if we are monitoring enough users with big enough mailboxes)
Does anyone know the best way to do this? Thanks!
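For illustration, the folder-by-folder fallback we've been considering looks roughly like this sketch (in Python against the raw REST API for brevity, since our real code uses the .NET client; ACCESS_TOKEN and USER_ID are placeholders, and whether the includeHiddenFolders parameter plus the childFolders walk truly covers every folder is exactly the open question):

import requests

BASE = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer ACCESS_TOKEN"}

def get_all(url):
    # Follow @odata.nextLink until the collection is exhausted.
    while url:
        page = requests.get(url, headers=HEADERS).json()
        yield from page.get("value", [])
        url = page.get("@odata.nextLink")

def all_folder_ids(user_id):
    # includeHiddenFolders=true also surfaces hidden folders such as the
    # RecoverableItems subtree; child folders are then walked iteratively.
    stack = [f["id"] for f in get_all(
        f"{BASE}/users/{user_id}/mailFolders?includeHiddenFolders=true&$top=100")]
    while stack:
        fid = stack.pop()
        yield fid
        stack.extend(c["id"] for c in get_all(
            f"{BASE}/users/{user_id}/mailFolders/{fid}/childFolders?$top=100"))

def all_messages(user_id):
    # Copies in the RecoverableItems folders can duplicate live items,
    # so key the result set by internetMessageId to drop duplicates.
    seen = {}
    for fid in all_folder_ids(user_id):
        for msg in get_all(f"{BASE}/users/{user_id}/mailFolders/{fid}/messages?$top=100"):
            seen.setdefault(msg.get("internetMessageId", msg["id"]), msg)
    return list(seen.values())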
Not sure if this is even possible, but we are looking for a way to trigger Mailchimp newsletters based on a custom field value in a WordPress website.
Basically, we will have a field value that holds "the number of miles" a person has walked, based on the data they enter, and we will be calculating the "total miles". When they reach 100 miles, for example, we will need an email to trigger from Mailchimp; then 200 miles will trigger a second email, and so on.
Does anyone know if this can even be done with Mailchimp? If not, is there a better approach to handling this?
THANK YOU!
If you are familiar with Python, I'd recommend using a Jupyter notebook for this to cut down on development work. You could set it to run at regular intervals (either on your computer or on a server), checking the status of each user and then updating the corresponding merge tag in Mailchimp. You can have automations that are triggered when the distance merge tag reaches a specific value: at 100 they get the 100 email, at 200 they get the 200 email. (You could also update the merge tag in Mailchimp the moment a user hits a certain milestone, but from my experience that's a little more work.)
Net net, there are a few ways to achieve your goal, but I think a Python notebook using pandas to manipulate the data and the mailchimp3 API client would be the lightest lift.
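A minimal sketch of that merge-tag update with mailchimp3 (API_KEY and LIST_ID are placeholders, and MILES is a hypothetical merge tag that must already exist on your list; an automation in Mailchimp would then be configured to fire when MILES reaches each milestone):

import hashlib
from mailchimp3 import MailChimp

client = MailChimp(mc_api="API_KEY")

def update_miles(email, total_miles):
    # Mailchimp identifies list members by the MD5 hash of the lowercased email.
    subscriber_hash = hashlib.md5(email.lower().encode()).hexdigest()
    client.lists.members.update(
        list_id="LIST_ID",
        subscriber_hash=subscriber_hash,
        data={"merge_fields": {"MILES": total_miles}},
    )

# An automation set to trigger when MILES equals 100 then sends the first email.
update_miles("walker@example.com", 100)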
TIP: Mailchimp currently has a bug where merge tag information is not always accurately represented in the UI. For example, if via the API you added 500 people with a Distance merge value of 200 and then checked in the UI how many people had a Distance value of 200, you would likely see an inaccurate count. If you export the list, you will see the correct number, reflecting your API update. To be clear: in some cases the UI does not display an accurate count of users with a given merge tag or value, but if you export the list with that merge tag/value via the UI, it should match what you pushed through the API. This is currently an open ticket.
In a previous version of the Mailchimp API there was an option to get a specific list of members: you were able to send a list of emails and get those members back.
In version 3.0 the only options are to get ALL members or to get ONE specific member:
/lists/{list_id}/members => get all Members
/lists/{list_id}/members/{subscriber_hash} => get ONE member by Email
So neither option is good. What if I have 100 emails and I want to get those 100 members from my Mailchimp list, which has 20k subscribers?
With the first option I would need to get all 20k members from the list and then take the 100 I need? That's bad.
With the second option I would need to loop over the 100 emails and send 100 requests, one per member.
Is there any workaround in v3.0 to get a list of members by querying multiple emails?
I can't find a filter/query like this in the API either.
I would make a request to get all 20k members and then pick out your 100 with a simple LINQ/foreach filter. With this you will get your 100 members from the JSON, so I think that's the fastest and easiest way to do it.
You can use batch operations, but that's almost the same as looping over requests, and it's harder to get the results and much slower (you need to poll until the batch is ready, which can take several minutes, then fetch the result URL, which gives you a .tar.gz, so you need to unpack it twice, etc.). So I think this is a dead end for "GET" requests.
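A sketch of that fetch-all-and-filter approach in Python (the same idea as the LINQ filter above; API_KEY, the "us6" data centre and LIST_ID are placeholders, and count is capped at 1000 per page by the API, so a 20k list takes 20 requests):

import requests

API_KEY, DC, LIST_ID = "API_KEY", "us6", "LIST_ID"
URL = f"https://{DC}.api.mailchimp.com/3.0/lists/{LIST_ID}/members"

def members_by_email(wanted):
    # wanted: a set of lowercased email addresses, e.g. your 100 emails.
    found, offset, count = {}, 0, 1000
    while True:
        page = requests.get(
            URL,
            auth=("anystring", API_KEY),  # basic auth: any username + API key
            params={"count": count, "offset": offset,
                    "fields": "members.email_address,members.id,total_items"},
        ).json()
        for m in page["members"]:
            email = m["email_address"].lower()
            if email in wanted:
                found[email] = m
        offset += count
        if offset >= page["total_items"]:
            return found

print(members_by_email({"a@example.com", "b@example.com"}))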