Apps Script Functions in Google Sheet Stuck at "Loading..." - performance

I've been experiencing a couple of issues with Google Sheets and Apps Script lately, and I'm hoping someone can help me out. I'll confine this post to one topic/question, but I'll mention a couple of other problems I'm having for greater context, in case they are related or may be causing the problem specific to this topic.
Problem: My custom Apps Script functions in my Google Spreadsheet are currently stuck at "Loading..." and, at the time of writing, have been for over an hour. There are no records of them being executed in My Executions in my Apps Script dashboard, nor are there any errors reported anywhere. These functions had been working for the last couple of months until today.
Details:
So, in my Google Sheet I'm using a custom function, findSeenCount. It's used quite a few times across a couple of sheets, and though I don't think its logic is relevant here, its purpose is to perform a count on things too complex to express by chaining the built-in spreadsheet count functions and conditions together. The function itself works fine, and has for several months now. However, today, while working on a separate script (working title: newFunction), I noticed that every time I saved my script project or used Run -> newFunction in the editor, all my findSeenCount calls in the sheet would be triggered to run (each got an entry in the execution log), but the sheet itself (where the calls actually are) never recalculated. The return values stayed the same and never changed to "Loading...", yet executions were clearly happening according to the dashboard. This was quite taxing, and strange, because at the time newFunction was just making two simple requests to get specific sheets, one of which contains some findSeenCount calls (though I've never had this issue with my other functions before).
function newFunction() {
  var attendanceSheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Attendance");
  // Sheet that contains findSeenCount.
  var P1Sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("P1 Prio");
}
At this point, my Apps Script dashboard -> My Executions page became laggy and eventually started crashing. No errors were reported, and when the page did load I could see the unwanted executions completing. They ran a couple hundred times as I was trying to work, so eventually I removed all the findSeenCount calls from the sheet entirely.

After I did that, no function I tried to trigger would work. In the sheet, a function call would get stuck on "Loading..." and no execution records would show up in dashboard -> My Executions. If I ran a function from the editor, the "running function" box would pop up, eventually go away, and that was it: again, no execution record in My Executions, and if I wrote a log statement it never returned any results, just "Waiting for logs, please wait...". I'm not seeing errors anywhere, and I don't think I exceeded any execution quota, as I would expect to be told somewhere if I had.

On top of this, I've also noticed that I have tons of recorded executions; my application sees somewhat heavy traffic daily and has for the last several months. That alone causes a little delay just loading a page of executions, and I'm not sure if there's a way I can clear this list -- but that's getting off topic.
If anyone has any ideas about what's going on, what I'm supposed to do about this, or where I can find any logging that might give me better details about the problem, please let me know!
Thanks

A quick fix is to clear your browser cache and cookies, then clear the sheet and add your formulas again.
A more reliable way is to ditch the custom formulas and use triggers, menus, or buttons instead. Custom functions are unreliable at large scale.
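For instance, a minimal sketch of that idea: move the counting logic into a plain function that a menu item or trigger calls once, writing the result with setValue() instead of recalculating per cell as a custom function. The sheet names, range, and match rule below are hypothetical stand-ins for whatever findSeenCount actually does.

```javascript
// Hypothetical stand-in for findSeenCount's logic: count rows in a
// 2-D values array (the shape returned by Range.getValues()) where the
// first column matches `name` and the second column is non-empty.
function countSeen(values, name) {
  var count = 0;
  for (var i = 0; i < values.length; i++) {
    if (values[i][0] === name && values[i][1] !== "") {
      count++;
    }
  }
  return count;
}

// In Apps Script, a menu handler or trigger could then call it and write
// the result once, instead of the sheet re-running a custom function:
//
// function refreshSeenCounts() {
//   var ss = SpreadsheetApp.getActiveSpreadsheet();
//   var data = ss.getSheetByName("Attendance").getDataRange().getValues();
//   ss.getSheetByName("P1 Prio").getRange("B2").setValue(countSeen(data, "Alice"));
// }
```

Because the result is written as a plain value, it survives even when custom-function evaluation is stuck at "Loading...".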

Related

What could be the cause of slow performance in both confirm sale.order and validate stock.picking?

In one of my databases, which has more than 10k products and thousands of contacts, sale orders, and inventory transfers, there is weirdly slow performance every time the Confirm (action_sale_ok) button is clicked in sale.order, and the same slowness when processing the Validate (button_validate) button in stock.picking.
The issue is that the page takes more than 10 seconds to finish the process, displaying a loading screen that prevents the user from doing anything.
What could be the cause of this problem?
I tried debugging in many places in stock.picking and stock.move related to button_validate, but found nothing. It seems to be something executed after the validation process that I can't trace -- maybe some dependent compute methods.

Flow Triggering Itself (Possibly), Each Run Hits Past IDs That Were Edited

I am pretty new to Power Automate. I created a flow that triggers when an item is created or modified. It initializes some variables and then runs some switch cases to assign values to each of them. The variables then go into an array, and another variable is incremented to get the array's total. I then have a conditional that assigns a value to a column in the list. I tested the flow by going into the modern view of the list and clicking the save button. This worked a bunch of times, so I sent it for user testing. One of the users edited multiple items by double-clicking into each item, which saves after each column change (which I assume triggers a run of the flow).
The flow seemingly works but seemed to get bogged down at a point, based on run history. I let it sit overnight and then tested again, and now it shows runs for multiple IDs at a time even though I only edited one specific item.
I had another developer take a look at my flow and he could not spot anything wrong with it. It never had a hard error in testing, only warnings about conditionals possibly causing a loop, but all my conditionals resolve. Pictures included. I am just not sure what caveats I might be missing.
I am currently letting the flow sit to see if it finishes catching up. I read about the concurrency control option as well as conditions on the trigger itself. I am curious why it seems to run on two (or more) records all at once without me or anyone editing each one.
You might be able to ignore the updates made by the service account used in the actions' connection with the following trigger condition expression:
@not(equals(triggerOutputs()?['body/Editor/Claims'], 'i:0#.f|membership|johndoe#contoso.onmicrosoft.com'))

How to get some Browser performance indication using testcafe

I have an application that has some room for performance improvements.
Our customer has requested some performance measurements on the client (browser) side,
and I'm trying to use TestCafe to get some execution-time indications.
One option is to have people access the different features with Chrome's developer
tools open and take note of the DOMContentLoaded values, but that is boring, error-prone, and time-consuming.
Using TestCafe we can do begin-to-end measurements, but because TestCafe loads
pages through its proxy, these figures will clearly be worse.
There are several questions:
1. Amount of delay added by the proxy:
Does anybody have an idea of something like a multiplier factor,
i.e. times measured in TestCafe will be X times the DOMContentLoaded you get from the developer console?
2. When to get a Selector value from the page
I'm trying to do this:
S1 - access the page PageUnderTest
S2 - set filter values
S3 - click search to submit the page and apply the filters
S4 - the PageUnderTest is rendered with the filters applied
Because I'm trying to measure the time until the page is loaded,
I take a BEGIN timestamp before issuing t.click(button) (S3),
then I wait for the page title; but not knowing how TestCafe works,
I fear TestCafe gets this value at S3, because the PageUnderTest is already
rendered.
Can anybody provide some clarifications?
I have a token that changes on each submit, so I read the token in S3 (before the click)
and then loop reading the token until its value differs from the one read in S3.
Do you think this is a good approach?
3. How to tell that the page has been fully rendered
Do you have any suggestions?
Best regards
TestCafe is a tool built for functional testing; it supports writing end-to-end tests that replicate real user scenarios in your web application. Do not use it for non-functional testing (like performance or load testing); such tests would not yield conclusive results. You can read more about TestCafe's scope here
Try artillery for load testing or performance testing.
Also, if you want to measure the time it takes for a UI element to appear, you can build a counter, but those results will not be very accurate.
I used TestCafe to do this:
Start a timer & click button X.
Stop the timer when element Y appears.
I wanted to see how long it takes for a UI element to appear, but this was not a valid test: the UI wasn't slow, the API behind it was. That is when I gave Artillery a try.
I use Artillery + Testcafe for my tests. I'm a QA so I don't really know others.
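As a rough, framework-agnostic sketch of the poll-until-changed idea from question 2: read a baseline value, then poll until the value differs and report the elapsed time. The getter, intervals, and the TestCafe-style wiring in the comment are all hypothetical, and, as noted above, wall-clock numbers taken through the proxy are only indicative.

```javascript
// Poll `getValue` every `intervalMs` until it returns something different
// from `baseline`, or until `timeoutMs` elapses. Resolves with the new
// value and the elapsed time in milliseconds.
function waitForChange(getValue, baseline, intervalMs, timeoutMs) {
  var start = Date.now();
  return new Promise(function (resolve, reject) {
    (function poll() {
      var current = getValue();
      if (current !== baseline) {
        resolve({ value: current, elapsedMs: Date.now() - start });
      } else if (Date.now() - start >= timeoutMs) {
        reject(new Error("timed out waiting for change"));
      } else {
        setTimeout(poll, intervalMs);
      }
    })();
  });
}

// Usage sketch in a test (names hypothetical):
//   const before = readToken();           // token read in S3, before the click
//   await t.click(searchButton);
//   const { elapsedMs } = await waitForChange(readToken, before, 100, 10000);
```

The timeout matters: without it, a page that never re-renders would make the measurement loop forever instead of failing the test.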

Segmenting on users who have performed a behaviour not behaving as expected

I want to look at the effect of having performed a specific action sequence at any (tracked) time in the past on user retention and engagement.
The action sequence is that of performing an optional New User Flow.
This is signalled to Google Analytics via sending it appropriate events. That works fine. The events show up in reports as expected.
My problem is what happens to the results when I use these events to create segments. I have tried two different ways of creating a segment based on this in Advanced Segmentations: via Conditions (defining the segment via the end event, filtered over users, not sessions), and via Sequences (defining start and end events, again filtered over users, not sessions).
What I get when I look at various retention/loyalty reports, using either of these segments, is very clearly a result that does this segmentation within a session, not across user sessions. So for NUF completers, I am seeing all my loyalty/recency on Session 1, the session in which people are most likely to do the NUF, if they ever do it at all. This is not what I want. (Mind you, it is something that could be really useful in another context, with another event! But not for the new user flow.)
What are my options for getting what I want? I see two possible ways forward:
Using custom dimensions: assigning a custom dimension value in the code when the New User Flow is completed. However, I do not know if this will solve the cross-session persistence problem.
Injecting a UserID, which we do not currently do, and (somehow!) using the reports available when you inject a UserID to do this.
Are either of these paths plausible? Is there a better way forward? Is it silly to even try to do this in Google Analytics? I'm far more familiar with app-tracking solutions (e.g. Flurry, Mixpanel, DeltaDNA), which do this as a matter of course, than with Google Analytics, and the fact that this is at the very least awkward in Google Analytics is coming as a bit of a surprise.
thanks,
Heather

What is the reasoning for and the basic concepts behind an interstitial loading page?

I'm interested in finding out why this is used on some Web sites for processing user-initiated search submissions, how it affects the request and response flow, and programmatically why it would be necessary (or beneficial). In an MVC framework it seems difficult to execute since you are injecting another page into the middle of the flow.
EDIT:
Not advertising related. For instance, most travel sites used to do this, and there were no ads... some banking sites do it too, where there is just a loader that says something like "Please wait while we process your transaction...".
It is often used for long-running requests, to prevent the web server from timing out the request. With an interstitial page, you can continuously refresh the page until you get results back.
EDIT:
Also, for long-running requests, it is beneficial to have a "Loading..." page to show the user that something is happening. Without the interstitial page, the request can appear to have hung if it takes too long.
To supplement what HVS said, interstitials which appear before, say, a homepage loads are very much there for the purpose of advertising; we've all seen the 'close this ad' link.
One instance where they can be helpful from a user experience point of view is when a user initiates an action which requires feedback from a process which may take some time to respond - either because it's slow, busy or just has a lot of processing to do.
Think of a site where you book a flight online, for example. You often get an interstitial on hitting 'find flights' because the system has to go off and ask for all the relevant flight information, then sort it for you before displaying it on your screen. If this round trip of 'request, interrogate, return, display' is likely to take longer than a normal webpage transition, a UX designer may consider an interstitial screen (or message) to let the user know something is happening, while at the same time allowing the system the time it needs to complete the request. Any screen with this sort of face time is going to get the attention of your marketing department, from a 'well, while we've got them we might as well show them something' point of view.
As a UX designer myself, I don't always prefer interstitials like this; I'd love every system to return data immediately. But if it can't, for whatever reason, I'm very much for keeping the user in the loop as much as possible about what is happening, rather than leaving them to stare at the browser status bar until they either try again or get fed up and leave.
One final point when considering this is to have a lower and an upper time limit on a screen like this. If you need to show an interstitial, show it long enough for people to read and understand it, but not so long that they get fed up of waiting. As a rough guide, leave it open for at least 3-4 seconds (even if the process averages 4 seconds but finished after 1 on this occasion). Between 4 and 10 seconds, check every second to see if the process has responded (and take the user to the next page if it has), and after 10 seconds seriously consider telling the user to either try again or telling them you've failed (whilst at the same time getting your tech team to fix what is ultimately a problem that will affect your bottom line).
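That timing policy (minimum display time, periodic polling, hard cutoff) could be sketched roughly like this; the isDone check is a placeholder for however the page learns that the backend process finished, and the thresholds are parameters rather than fixed values:

```javascript
// Interstitial timing policy sketch: keep the screen up at least `minMs`
// even if the work finishes early, poll `isDone` every `pollMs` once past
// `minMs`, and give up with "failed" after `maxMs`.
function runInterstitial(isDone, minMs, pollMs, maxMs) {
  var start = Date.now();
  return new Promise(function (resolve) {
    (function tick() {
      var elapsed = Date.now() - start;
      if (elapsed >= maxMs) {
        resolve("failed");   // tell the user to retry, or report the failure
      } else if (elapsed >= minMs && isDone()) {
        resolve("done");     // move the user on to the next page
      } else {
        setTimeout(tick, pollMs);
      }
    })();
  });
}
```

With the figures above, that would be roughly runInterstitial(isDone, 4000, 1000, 10000): the screen is readable, slow processes are caught every second, and nobody waits past ten seconds without a clear outcome.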
I believe the vast majority of interstitial pages are there to run advertising.
