I'm currently working on a Xamarin PCL project in Visual Studio 2017 targeting UWP and Android. Until a month ago the project built quickly, but now each time I change something it takes about 100 seconds before the build completes.
My architecture:
I have one PCL project with UWP and Android heads.
This project references 4 library projects as DLLs.
One of those libraries references another library as a DLL.
It doesn't matter where my change is, it always takes around 100 seconds before it builds.
Build your project(s) with MSBuild PerformanceSummary or Diagnostic level logging, and at the end of the build log you will get two performance summaries, Target and Task. From there you will be able to focus on what is actually taking the most time...
i.e.
Target Performance Summary:
~~~~
117 ms _ResolveLibraryProjectImports 1 calls
229 ms _CollectAdditionalResourceFiles 1 calls
271 ms _ResolveAssemblies 1 calls
360 ms _SetLatestTargetFrameworkVersion 1 calls
362 ms _CopyIntermediateAssemblies 1 calls
422 ms _CopyMdbFiles 1 calls
437 ms _CreateBaseApk 1 calls
441 ms _CreateAdditionalResourceCache 1 calls
518 ms _GenerateJavaStubs 1 calls
570 ms _LinkAssembliesNoShrink 1 calls
602 ms _UpdateAndroidResgen 1 calls
~~~~
Task Performance Summary:
~~~~
359 ms ResolveSdks 1 calls
381 ms CreateItem 181 calls
437 ms CreateAdditionalLibraryResourceCache 1 calls
495 ms GenerateJavaStubs 1 calls
519 ms Copy 9 calls
567 ms LinkAssemblies 1 calls
1134 ms Csc 1 calls
1915 ms Aapt 3 calls
2097 ms Javac 1 calls
~~~~
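With Diagnostic verbosity the log can run to thousands of lines, so it can help to pull out just the summary entries and sort them. Here is a small sketch that parses lines in the `N ms TargetName K calls` format shown above (the sample data is taken from the summaries in this answer):

```python
import re

def parse_perf_summary(log_lines):
    """Parse MSBuild performance-summary lines of the form
    '117 ms _ResolveLibraryProjectImports 1 calls' and return
    (name, milliseconds) pairs sorted slowest-first."""
    entries = []
    for line in log_lines:
        m = re.match(r"\s*(\d+)\s+ms\s+(\S+)\s+\d+\s+calls", line)
        if m:
            entries.append((m.group(2), int(m.group(1))))
    return sorted(entries, key=lambda e: e[1], reverse=True)

log = [
    "117 ms _ResolveLibraryProjectImports 1 calls",
    "2097 ms Javac 1 calls",
    "1915 ms Aapt 3 calls",
]
print(parse_perf_summary(log))  # slowest entries first
```

The slowest targets and tasks bubble to the top, which is usually where the investigation should start.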
Re: https://developer.xamarin.com/guides/android/troubleshooting/troubleshooting/
I am still confused by some of the JMeter logs shown here. Can someone please shed some light on this?
Below is a log generated by JMeter for my tests.
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
summary + 1 in 00:00:02 = 0.5/s Avg: 1631 Min: 1631 Max: 1631 Err: 0 (0.00%) Active: 2 Started: 2 Finished: 0
summary + 218 in 00:00:25 = 8.6/s Avg: 816 Min: 141 Max: 1882 Err: 1 (0.46%) Active: 10 Started: 27 Finished: 17
summary = 219 in 00:00:27 = 8.1/s Avg: 820 Min: 141 Max: 1882 Err: 1 (0.46%)
summary + 81 in 00:00:15 = 5.4/s Avg: 998 Min: 201 Max: 2096 Err: 1 (1.23%) Active: 0 Started: 30 Finished: 30
summary = 300 in 00:00:42 = 7.1/s Avg: 868 Min: 141 Max: 2096 Err: 2 (0.67%)
Tidying up ... # Fri Jun 09 04:19:15 IDT 2017 (1496971155116)
Does this log mean that [in the last step] 300 requests were fired, the whole test took 00:00:42, and 7.1 threads/sec or 7.1 requests/sec were fired?
How can I make sure to increase the TPS? The same tests were run against a different site, and they are getting 132 TPS for the same tests on the same server. Can someone shed some light on this?
Here, the total number of requests is 300 and the throughput is 7 requests per second. These 300 requests were generated by the number of threads given in your Thread Group configuration. You can also see the number of active threads in the log results. These threads become active depending on your ramp-up time.
Ramp-up time controls how quickly users (threads) arrive at your application.
Check this for an example: How should I calculate Ramp-up time in Jmeter
You can give your script enough duration and also check "loop count forever", so that all of the threads keep hitting those requests on your application server until the test finishes.
Once all the threads are active, they will all be hitting requests on the server.
To increase the TPS, you have to increase the number of threads, because it is those threads that hit your desired requests on the server.
It also depends on the response time of your requests.
Suppose,
If you have 500 virtual users and application response time is 1 second - you will have 500 RPS
If you have 500 virtual users and application response time is 2 seconds - you will have 250 RPS
If you have 500 virtual users and application response time is 500 ms - you will have 1000 RPS.
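The relationship in those three examples is simply throughput = virtual users / response time, assuming every thread fires its next request as soon as the previous one completes. A quick sketch:

```python
def requests_per_second(virtual_users, response_time_seconds):
    """Steady-state throughput when every thread immediately
    re-issues a request as soon as the previous one completes."""
    return virtual_users / response_time_seconds

print(requests_per_second(500, 1.0))  # 500 users, 1 s response  -> 500.0 RPS
print(requests_per_second(500, 2.0))  # 500 users, 2 s response  -> 250.0 RPS
print(requests_per_second(500, 0.5))  # 500 users, 0.5 s response -> 1000.0 RPS
```

This is why throughput stops rising once the server's response time starts degrading under load: the two factors work against each other.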
First of all, a little of theory:
You have Sampler(s) which should mimic real user actions
You have Threads (virtual users) defined under Thread Group which mimic real users
JMeter starts threads which execute samplers as fast as they can and generate certain amount of requests per second. This "request per second" value depends on 2 factors:
number of virtual users
your application response time
The JMeter Summarizer doesn't tell the full story. I would recommend generating the HTML Reporting Dashboard from the .jtl results file; it provides much more comprehensive load test result data, which is far easier to analyze via tables and charts. It can be done as simply as:
jmeter -g /path/to/testresult.jtl -o /path/to/dashboard/output/folder
Looking at the current results, you achieved a maximum throughput of 7.1 requests per second with an average response time of 868 milliseconds.
So in order to have more "requests per second" you need to increase the number of "virtual users". If you increase the number of virtual users and "requests per second" is not increasing - it means that you identified so called saturation point and your application is not capable of handling more.
Why would terminal>traceroute #.#.#.# show different results than using Network Utility.app? Here are the first 3 hops. I am connected to PIA VPN, but regardless, both methods should show the same results, I would think.
terminal
traceroute to 8.8.8.8 (8.8.8.8), 64 hops max, 52 byte packets
1 10.199.1.1 (10.199.1.1) 42.559 ms 39.696 ms 38.293 ms
2 * * *
3 184-75-211-129.amanah.com (184.75.211.129) 49.639 ms
162.219.176.225 (162.219.176.225) 56.780 ms
dpaall.webexpressmail.net (162.219.179.65) 69.798 ms
netutil.app
traceroute to 8.8.8.8 (8.8.8.8), 64 hops max, 72 byte packets
1 10.199.1.1 (10.199.1.1) 41.221 ms 38.355 ms 47.237 ms
2 vl685-c8-10-c6-1.pnj1.choopa.net (209.222.15.225) 41.262 ms 38.674 ms 41.912 ms
3 vl126-br1.pnj1.choopa.net (108.61.92.105) 44.092 ms 36.200 ms 40.407 ms
I'm analyzing a dump file taken with procdump -ma w3wp on a Windows Server 2008 R2 SP1 machine running .NET 4.
0:000> !ASPXPages
Going to dump the HttpContexts found in the heap.
Loading the heap objects into our cache.
HttpContext Timeout Completed Running ThreadId ReturnCode Verb RequestPath+QueryString
0x0353f65c 110 Sec no 1429 Sec XXX 200 GET /Nav/ResTry.aspx qs1
0x03545a18 110 Sec yes XXX 302 GET /Nav/
0x0354f26c 110 Sec no 1366 Sec XXX 200 GET /Nav/ResTry.aspx te
0x0355a45c 110 Sec yes XXX 200 POST /Service/ResInhId_68022569!
0x035d8454 110 Sec yes XXX 302 POST /Service/ResIntf.ashx act
0x035e1268 110 Sec no 1213 Sec XXX 200 GET /Nav/ResTry.aspx te
0x12e77088 110 Sec no 6 Sec XXX 200 GET /Nav/Activities.mvc/Index/2/7
0x12e85b10 110 Sec no 5 Sec 215 200 GET /service/Ressaveresult.aspx even
0x12e89cb8 110 Sec no 5 Sec XXX 200 GET /Nav/Activities.mvc/Index/2/5 topicid=1
0x12ed5038 110 Sec no 4 Sec XXX 200 GET /Nav/Ressave.aspx e
0x12ed9dc0 110 Sec yes XXX 302 GET /Nav/DoItem.aspx ItemId=71937319
There's over 70 threads here but I trimmed the output.
Why do most of the ThreadIds not show up, appearing as XXX instead? If I use
!threads
I see almost every ID, but since that output is missing the page names, it's a bear to find out what they are doing. The threads aren't marked completed, and I don't believe they're truly dead, even though that's what the XXX allegedly means. When I looked at the currently executing requests in IIS, many of these pages showed up.
If I run
!threadpool
I do see dozens of threads running, even though I only see a handful of ThreadIds without XXX, which reinforces the point that they're not dead; somehow WinDbg or psscor4 is not loading the ThreadId properly.
Another question is why these weren't sent a Thread.Abort by IIS and ran past their specified timeout. Is it possible the thread that acts as the grim reaper was also delayed by the high-CPU issue on the machine? Can we verify this in WinDbg and identify this special thread somehow?
ThreadId is usually XXXX for completed requests. They are still in memory because they have not been garbage collected yet. However, it is strange that you also have XXXX for requests where Completed=no. Try running ~*e !clrstack to get stacks for all managed threads. That way, you will see exactly what is going on at the moment of the dump.
I recently got a new cable modem from my ISP (Rogers in Canada; old modem was a "Webstar" something, the new modem is a "SMC D3GN-RRR"). Since I got the new modem, it feels like my internet access is slower.
What I'm perceiving is that sometimes when I enter a URL and hit enter, there is a delay -- a slight delay, lasting anywhere from half a second to two or three seconds -- before the web page loads. Once the web page starts loading it loads fast, but there's that delay while it's looking it up or something.
I have a MacBook Pro, an Apple Airport Extreme wireless router, the new cable modem.
Is there some kind of tool, or a handy UNIX command (traceroute, or something?), I can run to see how much time it takes to jump from device to device, so I can "prove" where the delay is?
Just FYI, here's a "traceroute www.google.com", in case it's useful. I don't know what this means. :)
traceroute www.google.com
traceroute: Warning: www.google.com has multiple addresses; using 173.194.75.105
traceroute to www.l.google.com (173.194.75.105), 64 hops max, 52 byte packets
1 10.0.1.1 (10.0.1.1) 4.455 ms 1.204 ms 1.263 ms
2 * * *
3 * 69.63.255.237 (69.63.255.237) 36.694 ms 30.209 ms
4 69.63.250.210 (69.63.250.210) 44.503 ms 41.303 ms 46.039 ms
5 gw01.mtmc.phub.net.cable.rogers.com (66.185.81.137) 40.504 ms 34.937 ms 44.493 ms
6 * * *
7 * 216.239.47.114 (216.239.47.114) 58.605 ms 37.710 ms
8 216.239.46.170 (216.239.46.170) 56.073 ms 57.250 ms 64.373 ms
9 72.14.239.93 (72.14.239.93) 70.879 ms
209.85.249.11 (209.85.249.11) 114.399 ms 59.781 ms
10 209.85.243.114 (209.85.243.114) 72.877 ms 80.151 ms
209.85.241.222 (209.85.241.222) 82.524 ms
11 216.239.48.159 (216.239.48.159) 82.227 ms
216.239.48.183 (216.239.48.183) 80.065 ms
216.239.48.157 (216.239.48.157) 79.660 ms
12 * * *
13 ve-in-f105.1e100.net (173.194.75.105) 76.967 ms 71.142 ms 80.519 ms
Same problem for me. Download and upload numbers are great. It looks like DNS, in my case, is responding extremely slowly (not due to slow RTT or line speed; maybe the DNS server itself is overloaded, or perhaps the DNS in this region is overloaded or under some form of attack). If I'm right, it isn't trivial for us as "customers" to demonstrate that this is the explanation, or to have it fixed, unfortunately.
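One way to check whether DNS really is the slow step is to time name resolution separately from the page load itself. A rough sketch (the hostname here is just an example; `localhost` resolves locally, so substitute a real site to exercise your configured DNS server):

```python
import socket
import time

def time_dns_lookup(hostname):
    """Return the seconds spent resolving a hostname to an address.
    A lookup taking hundreds of milliseconds on a warm network
    points at the resolver rather than the link itself."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, 80)
    return time.perf_counter() - start

# Replace "localhost" with e.g. "www.google.com" to measure your
# actual DNS server; localhost avoids any network dependency here.
elapsed = time_dns_lookup("localhost")
print(f"lookup took {elapsed * 1000:.1f} ms")
```

If the lookup alone accounts for the one-to-three-second delay described above, the resolver is the culprit, and switching to a different DNS server would be the thing to try.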
I have experimented quite a bit with the CDN from Azure, and I thought I was home safe after a successful setup using a web role.
Why the web-role?
Well, I wanted the benefits of compression and caching headers, which I was unable to obtain the normal blob way. And as an added bonus, the case-sensitivity constraint was eliminated as well.
Enough about the choice of CDN serving: while all content was previously served from the same domain, I now serve more or less all "static" content from cdn.cuemon.net. In theory, this should improve performance, since browsers can spread content fetching in parallel over multiple domains instead of just one.
Unfortunately this has led to a decrease in performance, which I believe has to do with the number of hops before content is served (using a tracert command):
C:\Windows\system32>tracert -d cdn.cuemon.net
Tracing route to az162766.vo.msecnd.net [94.245.68.160]
over a maximum of 30 hops:
1 1 ms 1 ms 1 ms 192.168.1.1
2 21 ms 21 ms 21 ms 87.59.99.217
3 30 ms 30 ms 31 ms 62.95.54.124
4 30 ms 29 ms 29 ms 194.68.128.181
5 30 ms 30 ms 30 ms 207.46.42.44
6 83 ms 61 ms 59 ms 207.46.42.7
7 65 ms 65 ms 64 ms 207.46.42.13
8 65 ms 67 ms 74 ms 213.199.152.186
9 65 ms 65 ms 64 ms 94.245.68.160
C:\Windows\system32>tracert cdn.cuemon.net
Tracing route to az162766.vo.msecnd.net [94.245.68.160]
over a maximum of 30 hops:
1 1 ms 1 ms 1 ms 192.168.1.1
2 21 ms 22 ms 20 ms ge-1-1-0-1104.hlgnqu1.dk.ip.tdc.net [87.59.99.217]
3 29 ms 30 ms 30 ms ae1.tg4-peer1.sto.se.ip.tdc.net [62.95.54.124]
4 30 ms 30 ms 29 ms netnod-ix-ge-b-sth-1500.microsoft.com [194.68.128.181]
5 45 ms 45 ms 46 ms ge-3-0-0-0.ams-64cb-1a.ntwk.msn.net [207.46.42.10]
6 87 ms 59 ms 59 ms xe-3-2-0-0.fra-96cbe-1a.ntwk.msn.net [207.46.42.50]
7 68 ms 65 ms 65 ms xe-0-1-0-0.zrh-96cbe-1b.ntwk.msn.net [207.46.42.13]
8 65 ms 70 ms 74 ms 10gigabitethernet5-1.zrh-xmx-edgcom-1b.ntwk.msn.net [213.199.152.186]
9 65 ms 65 ms 65 ms cds29.zrh9.msecn.net [94.245.68.160]
As you can see from the above trace route, all external content is delayed for quite some time.
It is worth noticing that the Azure service is set up in North Europe and I am located in Denmark; so why is this trace route a bit... over the top?
Another issue might be that the web role runs on two extra-small instances; I have not yet found the time to try two small instances, but I know that Microsoft limits extra-small instances to 5 Mbps of WAN bandwidth, where small and above get 100 Mbps.
I am just unsure whether this applies to the CDN as well.
Anyway - any help and/or explanation is greatly appreciated.
And let me state, that I am very satisfied with the Azure platform - I am just curious in regards to the above mentioned matters.
Update
New tracert without the -d option.
Inspired by user728584, I researched and found this article, http://blogs.msdn.com/b/scicoria/archive/2011/03/11/taking-advantage-of-windows-azure-cdn-and-dynamic-pages-in-asp-net-caching-content-from-hosted-services.aspx, which I will investigate further with regard to public cache-control and the CDN.
This does not explain the excessive hop count, but I hope a skilled network professional can help shed light on this matter.
Rest assured that I will keep you posted on my findings.
Not to state the obvious, but I assume you have set the Cache-Control HTTP header to a large value, so that your content is not removed from the CDN cache and served from Blob Storage when you ran your tracert tests?
There are quite a few edge servers near you so I would expect it to perform better: 'Windows Azure CDN Node Locations' http://msdn.microsoft.com/en-us/library/windowsazure/gg680302.aspx
Maarten Balliauw has a great article on usage and use cases for the CDN (this might help?): http://acloudyplace.com/2012/04/using-the-windows-azure-content-delivery-network/
Not sure if that helps at all, interesting...
Okay, after I implemented public cache-control headers, the CDN appears to do what is expected: delivering content from x number of nodes in the CDN cluster.
The above comes with the caveat that it is based on experience; it has not been measured for concrete validation.
However, this link supports my theory: http://msdn.microsoft.com/en-us/wazplatformtrainingcourse_windowsazurecdn_topic3,
The time-to-live (TTL) setting for a blob controls for how long a CDN edge server returns a copy of the cached resource before requesting a fresh copy from its source in blob storage. Once this period expires, a new request will force the CDN server to retrieve the resource again from the original blob, at which point it will cache it again.
Which was my assumed challenge: the CDN-referenced resources kept polling the original blob.
Also, credit must be given to this link (provided by user728584): http://blogs.msdn.com/b/scicoria/archive/2011/03/11/taking-advantage-of-windows-azure-cdn-and-dynamic-pages-in-asp-net-caching-content-from-hosted-services.aspx.
And the final link for now: http://blogs.msdn.com/b/windowsazure/archive/2011/03/18/best-practices-for-the-windows-azure-content-delivery-network.aspx
For ASP.NET pages, the default behavior is to set cache control to private. In this case, the Windows Azure CDN will not cache this content. To override this behavior, use the Response object to change the default cache control settings.
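The effect of that default can be illustrated with a toy check of the Cache-Control header. The logic below is a deliberate simplification of real edge-server behavior, not the actual Azure CDN implementation:

```python
def cdn_may_cache(cache_control_header):
    """Very simplified: an edge server will not cache a response
    marked private or no-store; 'public' or an explicit max-age
    makes the response cacheable."""
    directives = [d.strip().lower() for d in cache_control_header.split(",")]
    if "private" in directives or "no-store" in directives:
        return False
    return "public" in directives or any(
        d.startswith("max-age=") for d in directives
    )

print(cdn_may_cache("private"))                   # ASP.NET default -> False
print(cdn_may_cache("public, max-age=31536000"))  # -> True
```

This is exactly why the web-role-served content was being refetched from the origin: with the default `private` header, the edge nodes never held on to it.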
So my conclusion so far for this little puzzle is that you must pay close attention to your cache-control headers (which are often set to private, for obvious reasons). If you skip the web-role approach, the TTL is 72 hours by default, so you may never experience what I experienced; it will just work out of the box.
Thanks to user728584 for pointing me in the right direction.