I am pretty new to Power Automate. I created a flow that triggers when an item is created or modified. It initializes some variables and then runs some switch cases to assign values to each of them. The variables then go into an array, and another variable is incremented to get the total of the array. I then have a conditional to assign a value to a column in the list. I tested the flow by going into the modern view of the list and clicking the save button. This worked a bunch of times, so I sent it for user testing. One of the users edited multiple items by double-clicking into the item, which saves after each column change (which I assume triggers a run of the flow).
The flow seemingly works, but based on the run history it seemed to get bogged down at a point. I let it sit overnight and then tested again, and now it shows runs for multiple item IDs at a time even though I only edited one specific item.
I had another developer take a look at my flow and he could not spot anything wrong with it. It never had a hard error in testing, only warnings about the conditionals causing a loop, but all my conditionals resolve. Pictures included. I am just not sure of any caveats I might be missing.
I am currently letting the flow sit to see if it finishes getting caught up. I read about the concurrency control option as well as conditions on the trigger itself. I am curious as to why it seems to run on two (or more) records all at once without me or anyone editing each one.
You might be able to ignore the updates made by the service account (the account used in the connections of the flow's actions) by using the following trigger condition expression:
@not(equals(triggerOutputs()?['body/Editor/Claims'], 'i:0#.f|membership|johndoe@contoso.onmicrosoft.com'))
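To add it, open the trigger's "..." menu, choose Settings, and paste the expression under Trigger Conditions; conditions must start with @, and when you list several, all of them have to be true for a run to start. Note this only helps if the flow's update actions run under a dedicated account that normal users don't edit with. You can also gate the trigger on column values the same way, for example (Status being a hypothetical choice column here):

@equals(triggerOutputs()?['body/Status/Value'], 'Pending')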
I'm working on a Power Automate flow where the flow is supposed to connect to Planner, get the tasks which are due tomorrow and send a message to an MS Teams Channel.
I got the entire flow working except for one thing: getting the names of the person(s) to whom the task is assigned.
Here's my current flow:
Getting the list of Tasks from Planner,
Filter those which have a Due Date set,
Filter the ones which have the Due Date tomorrow,
Get the Name of the Task Creator using Get User Profile,
Send the data into an MS Teams Channel.
This all works perfectly fine. However, I also need to get the names of the assigned users. I understand that they come as an array, and when I try to add it to the same Get User Profile action it gets wrapped into an unnecessary "Apply to Each", which breaks everything.
Does anyone have a clue how this can be solved? Basically, I need only one assignee, as we are not going to assign the same task to more than one person.
Any help would be much appreciated!
To avoid the loop (which is also a valid approach, long story), you can use an expression to get the first assigned user (in case there are multiple).
This is an example where you don't need to loop ...
... you can see I've initialised a (string) variable at the top which will hold the user ID GUID.
... then further down in the Set Assigned To operation, this is the expression I use ...
item()?['_assignments'][0]['userid']
That gets the first user and then the associated userid property. You can then pass that into the Get user profile (V2) action ...
Obviously, you need to adapt that to your flow but I hope that makes sense.
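As a small aside, if there is any chance the _assignments array could be empty, an equivalent expression that I believe degrades more gracefully is ...

first(item()?['_assignments'])?['userid']

... since first() should return null rather than throwing on an empty array, but do test that against your own data.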
I am trying to push an event towards GA3, mimicking an event sent by a browser towards GA. From this event I want to fill custom dimensions (visible in the User Explorer) and relate them to a GA client ID which has visited the website earlier. Could this be done without influencing website data too much? I want to enrich someone's data from an external source.
So far I can't seem to find the minimum fields which have to be in the event call for this to work. I've got these so far:
v=1&
_v=j96d&
a=1620641575&
t=event&
_s=1&
sd=24-bit&
sr=2560x1440&
vp=510x1287&
je=0&
_u=QACAAEAB~&
jid=&
gjid=&
cid=GAID&
tid=UA-x&
_gid=GAID&
gtm=gtm&
z=355736517&
uip=1.2.3.4&
ea=x&
el=x&
ec=x&
ni=1&
cd1=GAID&
cd2=Companyx&
dl=https%3A%2F%2Fexample.nl%2F&
ul=nl-nl&
de=UTF-8&
dt=example&
cd3=CEO
So far the custom dimension fields don't get overwritten with new values. Does anyone know what is missing, or can someone share a list of the necessary fields with example values?
OK, a few things:
1. The CD value will be overwritten only if that CD's scope is set to the user level in GA. Make sure it is.
2. You need to know the client ID of the user. You can confirm that you have the right CID by using the User Explorer in the GA interface, unless you track it in a CD. It allows filtering by client ID.
3. You want to make this hit non-interaction, otherwise you're inflating the session count, since GA will generate sessions for normal hits. A non-interaction hit has ni=1 among the params.
4. Wait. Scope calculations don't happen immediately in real time; they happen later on. Give it two days, then check the results and re-conduct your experiment.
5. Use a throwaway/test/lower GA property to experiment. You don't want to affect production data while not knowing exactly what you're doing.
A good use case for such an activity would be something like updating the lifetime value of existing users, enriching the data without waiting for all of them to come in. That's useful for targeting, attribution and more.
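For reference, per the Measurement Protocol parameter reference, the only fields an event hit strictly requires are v, tid, cid (or uid), t=event, ec and ea; everything else in your payload is optional. So a minimal enrichment hit could look like this, keeping ni=1 so the hit stays non-interaction (the category/action names and the CD index are placeholders for your own):

v=1&tid=UA-XXXXX-Y&cid=<client id>&t=event&ec=enrichment&ea=update&ni=1&cd1=<value>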
Thank you.
This is the case: all CDs are user-scoped.
This is the case; we are collecting them.
ni=1 is within the parameters of each event call.
There are so many parameters; which ones are necessary?
We are using a test property for this.
We also have the Bot filtering option checked.
It's hard to test when the User Explorer has a delay of two days and we are still not sure which parameters to use and which not. Who could help on the parameter part? My only goal is to update the CDs on the person. Who knows which parameters need to be part of the event call?
I've been experiencing a couple of issues with Google Sheets & Apps Script lately, and I'm hoping someone can help me out. I'm going to contain this post to one topic/question, but I'm going to mention a couple of other problems that I'm having just for greater context, in case the other issues are related or may be causing the problem specific to this topic.
Problem: My custom Apps Script functions in my Google Spreadsheet are currently stuck at "Loading..." and, at the time of writing this, have been for over an hour. There are no logs of them being executed in My Executions in my Apps Script dashboard, nor are there any errors reported anywhere. These functions were working and have been working for the last couple of months until today.
Details:
So, in my Google Sheet I'm using a custom function "findSeenCount". It's used quite a few times throughout a couple of sheets, and though I don't think the logic is relevant here, I will say that its purpose is to perform a count on specific things too complex to chain the basic spreadsheet counts and conditions together. The function itself works fine, and has for several months now.

However, today as I was working on a separate script (working title: newFunction), I started to notice that every time I would save my script project or run newFunction from the editor, it would trigger all my findSeenCount functions in the sheet to run (as in, each would get an entry in the execution log), but on the sheet itself (where the calls actually were) it never actually recalculated. The return values stayed the same and the cells never changed to "Loading...", but there were clearly executions happening according to the dashboard. This was quite taxing, and strange, as at the time I noticed this happening newFunction was just doing two simple requests to get some specific sheets, one of which was a sheet with some findSeenCount calls on it (though I've never had this issue before in some of my other functions).
function newFunction()
{
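// Merely fetching these two sheets was enough to set off the unwanted findSeenCount executions.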
var attendanceSheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Attendance");
// Sheet that contains findSeenCount.
var P1Sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("P1 Prio");
return;
}
At this point, my Apps Script dashboard -> My Executions page started to become pretty laggy and eventually just started crashing. No errors were being reported, and when the page would load I could see it completing the unwanted executions. It ran a couple hundred times as I was trying to work, so eventually I just removed all the findSeenCount calls from the sheet entirely.

After I did that, any function I tried to trigger would not work. In the sheet, if I called a function it would get stuck on "Loading..." and no execution records would show up in dashboard -> My Executions. If I ran a function in the editor, it would run, the "running function" box would pop up and eventually go away, and that was it. Again, no execution record would be recorded in the dashboard, and it would never actually return any results to the log ("Waiting for logs, please wait...") if I wrote in a log statement. I'm not seeing any errors anywhere, and I don't think I surpassed any sort of execution limit, as I would expect to be told somewhere that I had.

On top of this, I've also noticed that I have tons of executions recorded; my application sees somewhat heavy traffic daily and has for the last several months. This alone seems to cause a little delay when simply loading a page of the executions, and I'm not sure if there's a way I can clear this list -- but that's getting off topic.
If anyone has any ideas on what's going on, or what I'm supposed to do about this, or where I can find any sort of logging that may be able to give me better details about what the problem is please let me know!
Thanks
A quick fix is to clear your browser cache and cookies, then clear the sheet and add your formulas again.
A more reliable way is to ditch the custom formulas and use triggers, menus or buttons instead; custom functions are unreliable at large scale.
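For example, a minimal sketch of the menu approach (the sheet name, the ranges and the countSeen helper are placeholders for your own logic):

// Adds a custom menu when the spreadsheet opens.
function onOpen() {
  SpreadsheetApp.getUi()
    .createMenu('Reports')
    .addItem('Recalculate seen counts', 'recalculateSeenCounts')
    .addToUi();
}

// Runs the heavy logic once, on demand, and writes plain values back
// to the sheet instead of relying on custom-function recalculation.
function recalculateSeenCounts() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('P1 Prio');
  var names = sheet.getRange('A2:A').getValues();
  var counts = names.map(function (row) {
    // countSeen() stands in for whatever findSeenCount currently computes.
    return [row[0] ? countSeen(row[0]) : ''];
  });
  sheet.getRange(2, 2, counts.length, 1).setValues(counts);
}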
I want to look at the effect of having performed a specific action sequence at any (tracked) time in the past on user retention and engagement.
The action sequence is that of performing an optional New User Flow.
This is signalled to Google Analytics via sending it appropriate events. That works fine. The events show up in reports as expected.
My problem is what happens to the results when I use these events to create segments. I have tried two different ways of creating a segment based on this in Advanced Segments: via Conditions (defining the segment via the end event, filtered over users, not sessions), and via Sequences (defining start and end events, again filtered over users, not sessions).
What I get when I look at various retention/loyalty reports, using either of these segments, is very clearly a result which is doing this segmentation within a session, not across a user's sessions. So for NUF completers, I am seeing all my loyalty/recency on Session 1, in which people are most likely to do the NUF if they ever do it at all. This is not what I want. (Mind you, it is something that could be really useful in another context, with another event! But not for the new user flow.)
What are my options for getting what I want? I see two possible ways forward:
Using custom dimensions, assigning a custom dimension value in the code when the New User Flow is completed (see the sketch after this list). However, I do not know if this will solve the cross-session persistence problem.
Injecting a UserID, which we do not currently do, and (somehow!) using the reports available when you inject a UserID to do this.
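For context, the code change I have in mind for option 1 would be a couple of lines fired when the New User Flow completes, something like this (analytics.js, with a made-up dimension index, and nonInteraction set so the extra event doesn't skew sessions):

ga('set', 'dimension1', 'nuf-completed');
ga('send', 'event', 'NUF', 'complete', { nonInteraction: true });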
Are either of these paths plausible? Is there a better way forward? Is it silly to even try to do this in Google Analytics? I'm far more familiar with app tracking solutions (e.g. Flurry, Mixpanel, DeltaDNA), which do this as a matter of course, than with Google Analytics, and the fact that this is at the very least awkward in Google Analytics is coming as a bit of a surprise.
thanks,
Heather
In my application, users have a list of items that they can put in any order they like. The database schema looks like this:
Items
+ Id : int
+ Name : string
+ Order : int
so when the user puts things in order, it sets the Order field accordingly, so that I can sort it later. Great.
Now, I want to make the sort ajax-y, such that the user can drag and drop items into order (and use up/down arrows), and it will just automagically save everything. (If you're familiar with Netflix, they do a similar thing.)
The issue I'm having is that in order to persist the user's changes as they make them, I would need to do an AJAX call every time they do something. If the user moves an item from position 10 to position 1, that implies that I have to update 10 records in that one little AJAX call. Meanwhile, the user may have queued up three more AJAX calls to update other records.
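For concreteness, the per-move update I have in mind would be something along these lines (SQL Server flavour, names from the schema above, @movedItemId standing in for the dragged item):

UPDATE Items SET [Order] = [Order] + 1 WHERE [Order] >= 1 AND [Order] < 10;
UPDATE Items SET [Order] = 1 WHERE Id = @movedItemId;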
This seems inefficient and like it might be error prone (due to race conditions and so on, if the AJAX calls take a long time.) Should I be worrying about this? Is there a more efficient way to do this? If it makes a difference, I expect most users will have fewer than 5 items to sort.
Since JavaScript can't synchronize code, I agree that it would be difficult to implement code that would be sure to avoid race conditions, although I did find this article on implementing a mutex in JavaScript.
However, personally I think that rather than choose an option that is likely to result in race conditions, I would go with one of the following options:
Create a save button above the items that, when clicked, will save the order to the database.
Create a timer that will save the order every five seconds (or whatever), if something has changed. You would still want a save button for this, so the users could force a save.
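For the timer idea specifically, a rough sketch of a debounced save in plain JavaScript (the /items/reorder endpoint and the payload shape are my own invention):

var saveTimer = null;

// Call this after every drag/drop or up/down-arrow change.
function orderChanged(orderedIds) {
  // Restart the countdown so we only save once the user pauses.
  clearTimeout(saveTimer);
  saveTimer = setTimeout(function () {
    // One request carrying the full order, instead of one call per move,
    // sidesteps most of the race-condition worries.
    fetch('/items/reorder', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ orderedIds: orderedIds })
    });
  }, 5000);
}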
I would lean towards the latter. Obviously in both cases you would want some visual cue to the users that they have unsaved changes (like changing the background color of the items, for instance). You would most likely want to implement something that makes sure the user wants to leave the page with unsaved changes if you go with either of those options (like in Gmail, when you have unsaved changes in an email that you are composing).