Google Calendar API: event update with the Ruby gem - ruby

I'm using https://github.com/google/google-api-ruby-client to connect to different Google APIs, in particular the Google Calendar one.
Creating an event, updating it and deleting it work most of the time with what one can usually find around.
The issue appears when one tries to update an event's details after a previous update of the event's dates.
In that case, the ID provided is not enough and the request fails with an error:
SmhwCalendar::GoogleServiceException: Invalid sequence value. 400
Yet the documentation does not mention such things: https://developers.google.com/google-apps/calendar/v3/reference/calendars/update
The event documentation does describe the sequence attribute, without saying much: https://developers.google.com/google-apps/calendar/v3/reference/events/update
What's needed to update an event?
Are there specific attributes to keep track of when creating and updating events besides the event ID?
How is the Ruby Google API client handling those?

I think my answer from Cannot Decrease the Sequence Number of an Event applies here too.
Sequence number must not decrease (and if you don't supply it, it's the same as if you supplied 0), and some operations (such as time changes) will bump the sequence number. Make sure to always work on the most recent copy of the event (the one that was provided in the response).

@luc's answer is pretty correct, yet here are some details.
Google API documentation is unclear about this (https://developers.google.com/google-apps/calendar/v3/reference/events/update).
You should consider that the first response contains a sequence number of 0.
The first update should contain that sequence number (alongside the title, description, etc.). The response to that request will contain an incremented sequence number (1 in this case) that you should store and reuse on the next update.
If you don't pass any sequence number, the first update implies a sequence of 0 (and works); the second might still pass, but the third probably will not (because the server then expects 1 as the sequence).
So that attribute might appear optional, but it is actually not optional at all.
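To make this concrete, here is a minimal sketch with the google-api-ruby-client (assuming an already authorized Google::Apis::CalendarV3::CalendarService; the calendar and event IDs are placeholders):

require 'google/apis/calendar_v3'

service = Google::Apis::CalendarV3::CalendarService.new
service.authorization = authorization # assumed: OAuth2 credentials obtained elsewhere

calendar_id = 'primary'
event_id    = 'EVENT_ID' # placeholder

# Re-fetch the latest copy of the event: it carries the current sequence number.
event = service.get_event(calendar_id, event_id)

event.summary = 'Updated title'
# event.sequence already holds the server-side value; leave it in place
# rather than omitting it (omitting is treated like sequence = 0).
updated = service.update_event(calendar_id, event_id, event)

# Keep the returned resource (or at least updated.sequence) for the next update.
puts updated.sequence

The gem simply round-trips the sequence attribute on the Event resource and does not track it for you, so re-fetching (or storing the last response) before each update is the safe pattern.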

Related

D365 Same Tracking Token was assigned to Email/Case in Customer Service

One customer had a problem where an incorrect email (from another customer) was assigned to a case. The incorrectly assigned email was a response to a case that had been deleted. However, the current case has the same tracking token as the deleted one. It seems that the CRM system reuses a tracking token as soon as it becomes available again. This should not happen! From our point of view, this is a real programming error on Microsoft's side. The only solution we see is to increase the number of digits to the maximum so that it takes longer until all tracking tokens are used up. But in the end, you still reach the limit.
Is there another possibility, or has Microsoft really made a big mistake in the way emails are allocated?
We also activated Smart Matching, but that didn't help in this case either, because the allocation was made via the tracking token first.
Thanks
The structure of the tracking token can be configured and is set to 3 digits by default. This means that as soon as 999 emails are reached, the tracking token starts again at 1, which is basically a design flaw on Microsoft's part.
If you have "Automatic replies" enabled, this limit is reached in the shortest possible time. We therefore had to increase the number to 9 digits, which is also not a 100% solution: at some point this number of emails is also reached, and then emails are again assigned to requests that do not belong together. Microsoft has to come up with another solution.

GA3 Event Push: Necessary fields in Request

I am trying to push an event towards GA3, mimicking an event done by a browser towards GA. From this event I want to fill Custom Dimensions (visible in the user explorer) and relate them to a GA ID which has visited the website earlier. Could this be done without influencing website data too much? I want to enrich someone's data from an external source.
So far I can't seem to find the minimum fields which have to be in the event call for this to work. I've got these so far:
v=1&
_v=j96d&
a=1620641575&
t=event&
_s=1&
sd=24-bit&
sr=2560x1440&
vp=510x1287&
je=0&_u=QACAAEAB~&
jid=&
gjid=&
_u=QACAAEAB~&
cid=GAID&
tid=UA-x&
_gid=GAID&
gtm=gtm&
z=355736517&
uip=1.2.3.4&
ea=x&
el=x&
ec=x&
ni=1&
cd1=GAID&
cd2=Companyx&
dl=https%3A%2F%2Fexample.nl%2F&
ul=nl-nl&
de=UTF-8&
dt=example&
cd3=CEO
So far the Custom Dimension fields don't get overwritten with new values. Who knows which one is missing, or can share a list of necessary fields and example values?
Ok, a few things:
The CD value will be overwritten only if that CD's scope is set to the user level in GA. Make sure it is.
You need to know the client ID of the user. You can confirm that you have the right CID by using the user explorer in the GA interface, unless you track it in a CD. It allows filtering by client ID.
You want to make this hit non-interactional, otherwise you're inflating the session number, since GA will generate sessions for normal hits. A non-interactional hit has ni=1 among the params.
Be patient: scope calculations don't happen immediately in real time. They happen later on. Give it two days, then check the results and re-conduct your experiment.
Use a throwaway/test/lower GA property to experiment. You don't want to affect the production data while you're not sure exactly what you're doing.
A good use case for such an activity would be something like updating the lifetime value of existing users and enriching the data with it without waiting for all of them to come in. That's useful for targeting, attribution and more.
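For reference, a user-scoped CD update hit can be much smaller than the capture above. A hedged sketch in Ruby (tid, cid and the cd1 index are placeholders for your own property ID, client ID and dimension slot):

require 'net/http'
require 'uri'

# Minimal GA3 Measurement Protocol event hit; most of the captured params
# (sr, vp, je, gtm, z, ...) are browser telemetry and not required.
params = {
  v:   '1',            # protocol version
  tid: 'UA-XXXXX-Y',   # placeholder: your property ID
  cid: 'GA_CLIENT_ID', # placeholder: client ID of the user to enrich
  t:   'event',
  ec:  'enrichment',   # example category
  ea:  'cd-update',    # example action
  ni:  '1',            # non-interactional, so sessions aren't inflated
  cd1: 'Companyx'      # user-scoped custom dimension value (index 1 is an assumption)
}

res = Net::HTTP.post_form(URI('https://www.google-analytics.com/collect'), params)
puts res.code # GA answers 200 even for broken hits

Posting the same payload to https://www.google-analytics.com/debug/collect returns a JSON validation report, which is the quickest way to rule out missing required fields.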
Thank you.
This is the case; all CDs are user-scoped.
This is the case; we are collecting them.
ni=1 is within the parameters of each event call.
There are so many parameters; which ones are necessary?
We are using a test property for this.
We also have the "Bot filtering" option checked.
It's hard to test when the user explorer has a delay of 2 days and we are still not sure which parameters to use and which not. Who could help on the parameter part? My only goal is to update the CDs on the person. Who knows which parameters need to be part of the event call?

Proper way of obtaining the N latest videos of a channel: search vs playlistItems vs activities endpoints

I'm developing a web app that needs to retrieve the last 10 videos of a user (channel).
First approach
Was to use the search endpoint with the param 'forMine', ordering by date, but then I figured that maybe that param could retrieve videos uploaded by the user to a different channel or whatever...
First result with channel ID and date - 1st approach
Second approach
Was to use the search endpoint with the param 'channelId', ordering by date, but then I realized that descriptions were incomplete and, most importantly, there were some videos missing compared with the first approach, even though the missing videos belonged to the same channel (as shown in the linked pics).
First result with channel ID and date - 2nd approach
So then I googled for a solution and found another way.
Third approach
Was to use the playlistItems endpoint, as I found on Google, and it seemed OK (I supposed) because it returned the same videos as the first approach and consumed less quota. But this method left me with doubts, as I didn't know whether the videos would be the latest or whether they would be sorted by position in the playlist and couldn't be trusted to be the most recent.
That said, what would be the correct way to get the N most recent videos from a channel, please?
Regardless of the quota consumption (the less quota the better, of course, but an accurate result is essential).
I'm so confused by the API response...
Thank you so much!
-- EDITED: NEW APPROACH AND FURTHER INVESTIGATIONS --
Fourth approach
Was to use the activities endpoint, as stated by @stvar in his answer. I found that this way, as in the second approach, there were some videos missing compared with the first and third approaches. It was also required to retrieve everything without the 'maxResults' param, because there were activities not related to video uploads, making it mandatory to paginate and to filter by type 'upload' after retrieving the response in order to get N videos (or to be confident of getting N uploaded videos within the first 50 retrieved items).
Self Investigations
Further investigations and tests brought me the answer to the issue of the 'missing videos' in some approaches.
The status of those missing videos was 'unlisted': they were videos uploaded to the channel, property of the channel, uploaded by the user of the channel... but not retrieved by some methods, which seem to retrieve only 'public' videos, not 'unlisted' (hidden) nor 'private' ones.
NOTE: I did my tests with the Google API PHP Client Library; this behaviour seems not to occur in 'Try this API', which returns only 'public' items, so be careful about trusting 'Try this API' results, as it seems to use some hidden filters or something...
I also tested the channel's uploads playlist to verify that the order cannot be changed and follows a LIFO ordering.
CONCLUSIONS
At this point, my own conclusion is that there is no single proper way to solve this, but several ways to do it depending on the requirements regarding video status and the amount of free quota.
Search endpoint seems to work all right. If you have a good amount of unused quota (100 units per call), it is the most direct and easiest way, as you can sort and filter as needed with a bunch of params, taking care to use the 'forMine' param if you need every uploaded video, or 'channelId' if you need only 'listed' and 'public' ones.
PlaylistItems endpoint is a proper way if you are in a quota crisis (1 unit per call), as the result is sorted by most recent date. Take care to paginate and post-filter if only 'public' videos are needed, until you retrieve the desired amount of video IDs; otherwise you can go all the way easily (see the sketch after this list).
Note that the date used for ordering is the upload date, not the post date (thanks to @stvar for bringing this to attention).
Activities endpoint, also an option for a quota crisis (1 unit per call), could be more accurate than the others if you only want public videos (it is ordered by the most recent 'first publish date', so not 100% accurate either), but it is for me the one that takes the most work: it retrieves activities other than 'video upload', so you cannot skip pagination and post-filtering to collect the desired amount of video IDs. Besides, as said before, this way you only have access to public videos (which is fine if that meets your needs).
Anyway, if you need more than 50 IDs, you need to paginate whatever approach you use.
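As an illustration of the playlistItems route from the conclusions above, a sketch with the google-api-ruby-client (the API key and channel ID are placeholders):

require 'google/apis/youtube_v3'

youtube = Google::Apis::YoutubeV3::YouTubeService.new
youtube.key = 'API_KEY' # placeholder

# The uploads playlist ID is exposed on the channel resource (1 quota unit).
channel = youtube.list_channels('contentDetails', id: 'CHANNEL_ID').items.first
uploads_id = channel.content_details.related_playlists.uploads

# Most recent uploads first (1 quota unit per page); paginate past 50 if needed.
items = youtube.list_playlist_items('snippet,status',
                                    playlist_id: uploads_id,
                                    max_results: 10).items

items.each do |item|
  # Post-filter here (e.g. on item.status.privacy_status == 'public') if needed.
  puts "#{item.snippet.resource_id.video_id} #{item.snippet.title}"
end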
Hope this helps someone else, and thanks so much to the contributors.
PS: People in charge of the YouTube API: perhaps a filter by status, among some others, would be interesting. Thanks!!!
You may employ the Activities.list API endpoint, queried with:
mine=true,
part=snippet,contentDetails,
fields=items(snippet(type),contentDetails(upload)), and
maxResults=50.
To obtain your desired N uploads, you have to implement pagination; that is, you have to successively call the endpoint until you reach N result set items that have snippet.type equal to upload (sketched below).
Note that you may well use channelId=CHANNEL_ID instead of mine=true if you're interested in the most recent uploads of a channel identified by its ID CHANNEL_ID rather than your own channel.
According to the docs, you'll get from this endpoint a result set made of Activities resource items that will contain the following info:
contentDetails.upload (object)
The upload object contains information about the uploaded video. This property is only present if the snippet.type is upload.
contentDetails.upload.videoId (string)
The ID that YouTube uses to uniquely identify the uploaded video.
The official docs state that each call to Activities.list endpoint has a quota cost of one unit.
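A hedged sketch of that pagination with the google-api-ruby-client (assuming an authorized Google::Apis::YoutubeV3::YouTubeService; N = 10 as in the question):

require 'google/apis/youtube_v3'

youtube = Google::Apis::YoutubeV3::YouTubeService.new
youtube.authorization = authorization # assumed: OAuth2 credentials (mine=true requires auth)

n = 10
video_ids = []
page_token = nil

loop do
  response = youtube.list_activities(
    'snippet,contentDetails',
    mine: true, # or channel_id: 'CHANNEL_ID'
    max_results: 50,
    fields: 'nextPageToken,items(snippet(type),contentDetails(upload))',
    page_token: page_token
  )
  response.items.each do |item|
    # Only upload activities carry contentDetails.upload.videoId.
    next unless item.snippet.type == 'upload'
    video_ids << item.content_details.upload.video_id
    break if video_ids.size == n
  end
  page_token = response.next_page_token
  break if video_ids.size == n || page_token.nil?
end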
Furthermore, upon obtaining a set of video IDs, you may invoke the Videos.list endpoint with a properly assigned id parameter, to obtain from the endpoint all the details you need for each and every video of your interest.
Note that if you have a set of video IDs of cardinality K, then, since the id parameter of the Videos.list endpoint can be specified as a comma-separated list of video IDs, you may reduce the number of calls to the Videos.list endpoint from K to floor(K / 50) + (K % 50 ? 1 : 0) by appropriately using this feature of id.
According to the official docs, each call to the Videos.list endpoint also has a quota cost of one unit.
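For instance, a sketch of that batching (video_ids as collected above; adjust the part values to the details you actually need):

# Videos.list accepts up to 50 comma-separated IDs per call, so K IDs
# cost ceil(K / 50.0) calls instead of K individual ones.
videos = video_ids.each_slice(50).flat_map do |batch|
  youtube.list_videos('snippet,contentDetails,statistics', id: batch.join(',')).items
end

videos.each { |video| puts "#{video.id}: #{video.snippet.title}" }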
Clarifications upon OP's request:
Question no. 1: The Activities.list endpoint produces only the activities specified by the Activities resource. The type property enumerates them all:
snippet.type (string)
The type of activity that the resource describes.
Valid values for this property are: channelItem, comment (not currently returned), favorite, like, playlistItem, promotedItem, recommendation, social, subscription, upload, bulletin (deprecated).
Indeed, your remark is correct. For example, when getting the most recent 10 uploads, it is possible that you'll have to scan a number of result set pages P, with P >= 2, until you have collected the desired 10 upload items. (Actual tests have confirmed to me that this is factual.)
Question no. 2: The Activities.list endpoint produces items that are sorted by publishedAt; just replace the above fields with:
fields=items(snippet(type,publishedAt),contentDetails(upload))
and see that for yourself.
I could make here the following argument justifying the necessity that the items resulting from the invocation of the Activities.list endpoint be ordered chronologically by publishedAt (the newest first). One may note that, indeed, the official docs quoted above do not explicitly specify the ordering condition I just mentioned; but bear with me for a while:
My argument is of a pragmatic kind: if the result set of Activities.list is not ordered as mentioned, then this endpoint becomes useless. This is so since, in that case, to obtain the most recent upload activity one would have to fetch all the upload activities locally and then scan that result set for the most recent one. Being compelled to fetch all upload activities only to obtain the newest one is pragmatically nonsense. Therefore, by way of contradiction, the result set has to be ordered chronologically by publishedAt, with the newest being the first.
Question no. 3: Indeed, Search.list is not precise -- it has a fuzzy behavior. I can confirm this based on my own experience; but, unfortunately, I cannot point you to official docs (from Google or YouTube) that acknowledge and explain this behavior. As unfortunate as it is, for its users Search.list is completely opaque.
On the other hand, Activities.list is precise -- it has to be like that; if it weren't precise, that would be a serious bug in the implementation (in my educated opinion).

Is there any way to replay events in a date range?

I am implementing an example with Spring Boot and Axon. I have two events (deposit and withdraw account balance). I want to know whether there is any way to get the state of the Account Aggregate at a given date.
I want to get not just the final state, but to replay the events in a range of dates.
I think I can help with this.
In the context of Axon Framework, you can start a replay of events by telling a given TrackingEventProcessor to 'reset' its Tokens. By the way, the current description of this in the Reference Guide can be found here.
These TrackingTokens are the objects which know how far a given TrackingEventProcessor is in terms of handling events from the Event Stream. Thus, resetting/adjusting these TrackingTokens is what will issue a replay of events.
Knowing all this, the second step is to look at the methods the TrackingEventProcessor provides to 'reset tokens', which are threefold:
TrackingEventProcessor#resetTokens()
TrackingEventProcessor#resetTokens(Function<StreamableMessageSource, TrackingToken>)
TrackingEventProcessor#resetTokens(TrackingToken)
Option one will reset your tokens to the beginning of the event stream, which will thus replay everything.
Options two and three, however, give you the opportunity to provide a TrackingToken.
Thus, you could provide a TrackingToken starting from several points on the Event Stream. So, how do you go about creating such a TrackingToken at a specific point in time? To that end, you should take a look at the StreamableMessageSource interface, which has the following operations:
StreamableMessageSource#createTailToken()
StreamableMessageSource#createHeadToken()
StreamableMessageSource#createTokenAt(Instant)
StreamableMessageSource#createTokenSince(Duration)
Option 1 is what's used to create a token at the start of the stream, whilst option 2 will create a token at the head of the stream.
Options 3 and 4, however, allow you to create a token at a specific point in time, thus allowing you to replay all the events from the defined instant up to now.
There is one caveat in this scenario, however: you're asking to replay an Aggregate. From Axon's perspective, the Aggregate is by default the Command Model in a CQRS setup, thus dealing with Commands going into your system. In the majority of applications, you want Commands (e.g. the requests to change something) to operate on the current state of the application. As such, the Repository provided to retrieve an Aggregate does not allow specifying a point in time.
The above described solution with regard to replaying is thus solely tied to Query Model creation, as the TrackingEventProcessor is part of the Event Handling side of your application, most often used to create views. This idea also ties in with your question that you want to know the "state of the Account Aggregate" at a given point in time. That's not a command but a query, as you have 'a request for data' instead of 'a request to change state'.
Hope this helps you out, @Safe!

Is it really safe to use PATCH based on array index?

For instance, if we (as the client app) retrieve a Patient with an array of contacts and now send the FHIR server a PATCH request to modify some of the info for one of the contacts... the only way we saw to indicate it is by using the position. Example: Patient.contact[1].gender. That's only one example.
I think that approach (using the array position) is not safe, because services are not stateful and, besides, the server does not always return the same array in the same order (it makes no sense to assume we are receiving the contact list ordered), so the server could change the wrong contact (in this case, or in a more dangerous/unsafe situation if we use clinical resources).
Am I wrong? Is there another, safer approach to using PATCH without penalizing performance?
For a JSON Patch, you could use a "test" operation if you had a value within the array that can be relied upon. The patch operation as a whole is required to fail if the test fails: http://jsonpatch.com/#test
For XML Patch, I believe you may be able to do something similar with selectors? https://www.rfc-editor.org/rfc/rfc5261#section-4.1 - again, it depends on what you're trying to update.
I also agree with others that you should only attempt to patch if the version matches. There are very few updates that should be made to clinical data in a version-blind manner.
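A hedged sketch of that combination in Ruby (a JSON Patch "test" guarding the positional path, plus a version check via If-Match; the server URL, contact name and version are made-up placeholders):

require 'net/http'
require 'uri'
require 'json'

# The 'test' op makes the patch fail atomically if contact[1] is no
# longer the contact we read earlier.
patch = [
  { op: 'test',    path: '/contact/1/name/text', value: 'Dukhan' },
  { op: 'replace', path: '/contact/1/gender',    value: 'male' }
]

uri = URI('https://fhir.example.org/Patient/123') # placeholder
req = Net::HTTP::Patch.new(uri)
req['Content-Type'] = 'application/json-patch+json'
req['If-Match']     = 'W/"3"' # only apply against the version we actually read
req.body = patch.to_json

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
puts res.code # expect 412 Precondition Failed if the resource version moved on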
Servers are supposed to retain order. Not all servers will, but servers that don't probably won't be able to support PATCH. If you wish, feel free to submit a change request and we can highlight that in the specification.
Thanks so much for your clarification. Sure, we will request a change, at least in the documentation, to highlight this requirement (the server has to maintain the order).
But what do you mean exactly by "order"? For instance: meanwhile appclient1 retrieved the Patient with 3 contacts (Andrew, Bob, Dukhan) and sent a patch for [2] (Dukhan), but during this time another system (appclient2) added a new contact (Carl). Now the list (on the server side) will be Andrew (0), Bob (1), Carl (2) and Dukhan (3)... so when the PATCH request for Dukhan is received on the server from the initial appclient1, position [2] is no longer Dukhan; it is Carl. So we are still in the same unsafe situation.
