Change in ANR reporting - google-play

On September 1st we saw a significant increase in ANRs as reported on the Google Play Console. We've been struggling to figure out what may be causing them since we did not have any new releases around that time.
Today, looking in the Statistics section we saw a message that says:
August 30, 2021 17:00 Change in reporting data
On this date, we made a change to the way ANRs are reported to include ANRs without stack traces. As a result you may see an increase in the number of ANRs reported. These ANRs were already included in your ANR rate, so this metric is not affected.
However, we cannot find any more information about this. Does anyone have more information about this change that can help us understand the significant impact on our ANRs?
Thanks!

Lost Duration while Debugging Apex CPU time limit exceeded

I'm open to posting the code in this section to work through the optimization, but it's a bit lengthy and complex, so instead I'm hoping that somebody can assist me with a few debugging questions I have. My goal is to find out what is causing my Apex CPU Time Limit Exceeded issue.
When using the Debug Log in its basic or normal layout I receive the message
Maximum CPU Time: 15062 out of 10,000 ** Close to Limit
I've optimized and rewritten various loops and queries several times now, and in each case the number ends up around the same value, which leads me to believe it is lying to me and that my actual usage far exceeds that number. So on my journey I switched the Log Panels of the Developer Console to Analysis, in hopes of isolating exactly which loop, method, or area of the code is giving me a headache.
This leads me to my main question and problem.
Execution Tree, Performance Tree & Executed Units
All show that my durations are UNDER the 10,000 ms allowance. My largest consumption is 3,556.19 ms, used by a wrapper class I created: the constructor contains a fair amount of logic and builds a fairly complicated wrapper that spans 5-7 custom objects. Still, even with those 3,000+ ms, the remainder of the process shows negligible times, bringing my total to around 4,000 ms. Again, my question is: why am I unable to see or find what is consuming all my time?
Incorrect Iteration Data
In addition to this, on the Performance tree there is a column of data that shows the number of iterations for each method. I know that my production org has 81 objects that would essentially call the constructor for my custom wrapper object; i.e., my constructor SHOULD be called 81 times, but instead it is called 32 times. So my other question is: can I rely on the iteration data in that column? Or does it stop counting at a certain point because it was iterating so many times? It's possible that one of my objects is corrupted or causing an infinite loop somehow, but I don't want to dig through all the data in search of that conclusion if it's a known issue that the iteration data is inaccurate anyway.
System.Debug in the Production org
The last question is why my System.debug() lines are not displaying in the Developer Console on the production org. I've added several breadcrumbs throughout the code that would help me isolate which objects are making it through and which are not; however, I cannot, in any layout, view System.debug messages outside of my sandbox.
Sorry for the wealth of questions but I did want to give an honest effort to better understand the debugging process in Salesforce. If this is a lost cause I'm happy to start sharing some code as well but hopefully some debugging tips can get me to the solution.
It's likely your debug log got truncated, see "Each debug log must be 20 MB or smaller. If it exceeds this amount, you won’t see everything you need." in https://trailhead.salesforce.com/en/content/learn/modules/apex_basics_dotnet/debugging_diagnostics
Download the log and search for text similar to "skipped 123456 bytes of detailed log" to confirm; some System.debug statements will simply not show up.
You might have to fine-tune the log levels (don't log validation rules and workflows? don't log every single variable assignment with "FINE" level etc). You might have to set all flags to NONE, then track only 1 particular class/trigger that you suspect (see https://help.salesforce.com/articleView?id=code_debug_log_classes.htm&type=5 and https://salesforce.stackexchange.com/questions/214380/how-are-we-supposed-to-use-debug-logs-for-a-specific-apex-class-only)
If it's truncated, it's possible the analysis tools give up (I've had mixed luck with the Developer Console, to be honest; sometimes https://apextimeline.herokuapp.com/ is great for an overview, but it will also fail to parse a 20 MB log).
When all else fails you can load the log into Notepad++ (or any editor of your choice), find the lines related to method entry/exit (you might need a regular-expression search), take the filtered lines to Excel, play with "text to columns", and just look at the timing manually to see if there's a record that causes the spike. It could be record #10 that's the problem; the fact that limits are exhausted on #32 of 81 doesn't mean much. A regex search like (METHOD_ENTRY|METHOD_EXIT).*MyTriggerHandler.onBeforeUpdate could be a good start. But the first thing is to make sure the log is not truncated.
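The manual filter-and-diff pass described above can also be sketched in a few lines of Python. This is only an illustration: the `Handler.run` name is made up, and it assumes the standard Apex log layout in which each METHOD_ENTRY/METHOD_EXIT event carries elapsed nanoseconds in parentheses.

```python
import re
from collections import defaultdict

# Matches Apex debug log lines shaped like:
#   12:00:01.0 (1000000)|METHOD_ENTRY|[12]|01pabc|Handler.run()
# The value in parentheses is elapsed nanoseconds since the start of the transaction.
LINE = re.compile(r'\((\d+)\)\|(METHOD_ENTRY|METHOD_EXIT)\|.*\|([\w.]+)\(')

def method_durations(log_text):
    """Pair METHOD_ENTRY/METHOD_EXIT events and sum wall time per method, in ms."""
    stack = []
    totals = defaultdict(float)
    for line in log_text.splitlines():
        m = LINE.search(line)
        if not m:
            continue
        nanos, event, method = int(m.group(1)), m.group(2), m.group(3)
        if event == 'METHOD_ENTRY':
            stack.append((method, nanos))
        elif stack and stack[-1][0] == method:
            _, start = stack.pop()
            totals[method] += (nanos - start) / 1_000_000  # ns -> ms
    return dict(totals)
```

Sorting the resulting dict by value points straight at the record or method where the spike happens, without any spreadsheet gymnastics.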

BigQuery JDBC Nanos Negative

When trying to run a fairly basic query using the driver provided by Simba, I'm running into issues where the "nanosecond" value is negative, causing IllegalArgumentException.
When writing a simple query that returns a Timestamp value, what comes back is an epoch value that is initially stored in a Double. Going through and debugging for example, I can see that the value coming back from the query is "1.498571169792E9". This corresponds to a timestamp of "Tuesday, June 27, 2017 1:46:09.792 PM" according to epochconverter.com, which is exactly what it should be.
Continuing to step through the code, we eventually try to use BQCoreUtils.convertUnixToTimestamp(). Now, while I've tried to disassemble the class (thanks IntelliJ!), I can't quite figure out what's going on. It eventually tries to create a new TimestampTz() which is an extension of java.sql.Timestamp, but the value getting passed for nanos is negative. This of course prompts Java to throw an IllegalArgumentException, but I can't figure out what I need to do to avoid this.
I also don't know if there's a simpler explanation for what's going on. Ultimately, though, it appears that there's a driver bug: BQCoreUtils.convertUnixToTimestamp doesn't ensure that the nanos calculation stays non-negative.
The dumb question then is: has anyone else experienced issues querying BigQuery where Timestamp values trigger exceptions?
Update: Since this is happening in a Timestamp created by the JDBC driver, it does appear to be a bug in the driver itself. Please file it under https://issuetracker.google.com/issues?q=componentid:187149.
Original:
The timestamp resolution in BigQuery is microseconds, and it looks like the value you are providing is in seconds, so you should multiply it by 1000000.
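The arithmetic can be sketched as follows (Python for illustration; this is not the driver's actual code): split the fractional epoch-seconds value into whole seconds plus a non-negative nanosecond remainder, which is the invariant java.sql.Timestamp enforces on its nanos field.

```python
def to_seconds_and_nanos(epoch_seconds: float):
    """Split a fractional epoch-seconds value (e.g. 1.498571169792E9 from the
    question) into whole seconds and a non-negative nanosecond remainder."""
    millis = round(epoch_seconds * 1000)  # the value here carries ms precision
    seconds, ms = divmod(millis, 1000)    # divmod keeps the remainder >= 0,
                                          # even for pre-1970 (negative) timestamps
    return seconds, ms * 1_000_000        # nanos in [0, 999_000_000]
```

A naive `(epoch - seconds) * 1e9` computed in floating point can round the wrong way and hand Timestamp a negative nanos value, which is exactly the IllegalArgumentException described above; going through integer milliseconds with a floored remainder avoids that.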
With reference to the Google issue tracker:
This should be resolved in driver versions newer than 1.1.1, which also addressed other timestamp-related issues.
If any issue persists, please report it at the Google issue tracker and they will re-open it to investigate.

SonarQube issue count is different depending on which dashboard I use

I'm using SonarQube 5.3 and it seems that the issue count is different depending on the view I use.
Consider this pic:
If I look in Dashboards -> Issues, I see the numbers on the top left.
If I click the grand total (267,877), I end up in the Issues dashboard, where I see totally different numbers (bottom right).
Even on the main dashboard I see conflicting data (pic)
Why don't the numbers match? Am I missing something?
There is a difference between Measures and queries run on Issues: measures are collected at analysis time and stay as they are until the next analysis, while queries on Issues are updated in real time according to the changes you make to Issues.
From what I see, we can suppose the 267K Issues count is correct, and that something in your SearchServer stack is preventing it from staying up to date.
Check your sonar.log for Elasticsearch errors, and make sure you have enough free disk space available under SQ_HOME/data/es to store and update your Issues.
To confirm this, you can also stop your SQ server, clean the data/es directory, and restart SQ. Data should be consistent after that.
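If you want to script that cleanup step, a minimal sketch (the helper name and SQ_HOME path are mine; the server must be stopped before running it):

```python
import pathlib
import shutil

def reset_es_index(sq_home):
    """Delete SonarQube's local Elasticsearch index under SQ_HOME/data/es.
    It is rebuilt from the database on the next server start.
    Run only while the server is stopped."""
    es_data = pathlib.Path(sq_home) / 'data' / 'es'
    if es_data.exists():
        shutil.rmtree(es_data)
    return es_data
```

On startup SonarQube repopulates the index from the database, after which the real-time Issues queries and the stored measures should line up again.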

Making sense out of VS2010 Coded UI Debug Trace

I'm working with VS 2010 CUIT projects and have run into some issues that I'm having a hard time understanding. Two things in particular are causing me trouble:
In the Debug Trace I get messages of the type: "PERF WARNING: FindAllDescendents took XXXX ms. Expected it to take maximum 500 ms". I understand what the warning means, but I can't always (easily) determine which query is causing the issue. Is there a way to add more information to the debug trace that would include the information I'm looking for?
I also see messages like this one: "PERF WARNING: CacheQueryId: took XXX ms. Expected it to take maximum 100 ms." I can't figure out what the warning really means or if there's anything that can be done to "fix" it.
Thanks.
After a lot of searching I came across a post that explains how to increase the level of detail in the debug trace. I added the following registry keys:
[HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0\EnterpriseTools\QualityTools\Diagnostics]
"EnableTracing"=dword:00000001
"TraceLevel"=dword:00000004
and can now see the information about which control exactly is taking a long time to find. Just a warning though - there's a lot of information in the trace now so it's much harder to sift through it.
Still looking for an answer to my second question, or in general a list of the warnings in the debug trace and what they mean.

Feedburner awareness API 0 circulation

The FeedBurner Awareness API seemed to be working fine until last night, but it's not working right now. It's not even down; it's returning 0 for every site. I wonder if there is something I am missing, or whether they have removed this functionality.
https://feedburner.google.com/api/awareness/1.0/GetFeedData?uri=http://feeds.feedburner.com/RandomGoodStuff
However, if I pass explicit dates to FeedBurner, it does return the values.
https://feedburner.google.com/api/awareness/1.0/GetFeedData?uri=http://feeds2.feedburner.com/RandomGoodStuff&dates=2008-01-01,2008-04-02
Does anyone know what is going on? I looked for any announced change in the API but didn't find one, and I couldn't find a way to ask Google about it either.
According to the Awareness API documentation, circulation is "an approximate measure of the number of individuals for whom your feed has been requested in the 24 hour period described by date". So just add &dates=YYYY-MM-DD, where the date is the day before yesterday. That way you will always get a fresh, greater-than-zero result.
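A minimal sketch of building such a request (the `awareness_url` helper is hypothetical; the endpoint and the `uri`/`dates` parameters are the ones shown in the question):

```python
from datetime import date, timedelta
from urllib.parse import urlencode

FEED_DATA_ENDPOINT = 'https://feedburner.google.com/api/awareness/1.0/GetFeedData'

def awareness_url(feed_uri, on=None):
    # Default to the day before yesterday: per the answer above, the most
    # recent day for which circulation data is reliably populated.
    on = on or (date.today() - timedelta(days=2))
    return FEED_DATA_ENDPOINT + '?' + urlencode({'uri': feed_uri,
                                                 'dates': on.isoformat()})
```

Fetching that URL should then return non-zero circulation, unlike the date-less form of the request.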