I have an issue with MvxRecyclerView and HttpClient.SendAsync
Specifically, at the point HttpClient.SendAsync receives the HttpResponseMessage back from the server, my MvxRecyclerView becomes jerky and unresponsive for a few seconds. As I allow data to synchronise with the server constantly whilst the user is inputting information, this becomes a fairly frustrating issue.
My MvxRecyclerView is relatively complex, with potentially 30 different templates in the MvxTemplateSelector. However, the majority of the time the MvxRecyclerView performs flawlessly, displaying the correct templates and scrolling smoothly. If I enable Airplane Mode on the device, thus preventing server synchronisation from taking place, I have no issues at all.
Through trial and error, I have pinpointed the exact point at which it becomes unresponsive, and it appears to be the LoadIntoBufferAsync method on HttpContent in the Mono Android base class library:
https://github.com/mono/mono/blob/22da2fe31ef898e7021fad59521ee0b329d37b07/mcs/class/System.Net.Http/System.Net.Http/HttpContent.cs#L137
If I cut down the complexity of the MvxItemTemplate and only return one template regardless of the item type, then I again have no issues at all.
This appears to be a perfect storm between the processing of the item template as the MvxRecyclerView is being scrolled, and the point at which the HttpResponseMessage is received back onto the mobile device.
I have tried using ConfigureAwait(false) on the HttpClient.SendAsync call, and also running the HTTP operations in an Android Service and on a different thread using Task.Run, but there is always an issue at the point of data return regardless.
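For reference, the offloading I have tried looks roughly like the sketch below (simplified, with a placeholder endpoint and illustrative names):

using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class SyncClient
{
    // Shared HttpClient instance; the endpoint below is a placeholder.
    private static readonly HttpClient Client = new HttpClient();

    public async Task<string> SyncWithServerAsync(CancellationToken token)
    {
        // The whole call (SendAsync, the internal buffering and the body read)
        // runs on a thread-pool thread and is never awaited on the UI context.
        return await Task.Run(async () =>
        {
            var request = new HttpRequestMessage(HttpMethod.Get, "https://example.com/api/sync");
            using (var response = await Client.SendAsync(request, token).ConfigureAwait(false))
            {
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync().ConfigureAwait(false);
            }
        }, token).ConfigureAwait(false);
    }
}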
I do some fairly heavy database processing with the data returned from the HttpResponseMessage and this does not affect the usage of the MvxRecyclerView at all.
Is this something anyone else has encountered with the HttpClient? Is there a way to ensure the UI Thread is never blocked using SendAsync?
Comparing Blazor Server with Blazor WebAssembly, I find one striking difference that can, and arguably should, have a great impact on the application design. Let me outline that difference on the basis of a simple requirement:
Clicking a button increases a counter, and whenever the counter reaches 10, the button is disabled.
Let's compare:
Blazor WebAssembly: The click event handler executes synchronously directly in the browser right after the user has clicked the button. The counter is increased and the condition gets checked. If the counter is 10, the button will be disabled using property binding. There is no way the user could have clicked an eleventh time.
Blazor Server: The click event handler executes asynchronously on the server, only after the click event has been published to the channel and received by the server. This might take an arbitrary amount of time, depending on the internet connection. The counter gets increased on the server and the condition gets checked. However, before the browser gets informed about the potential state change of the button, the user can click the button one or many more times, causing the event handler on the server to be executed again, potentially increasing the counter more than is allowed.
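To make the comparison concrete, here is a minimal sketch of the counter component (illustrative names; the source is identical for both hosting models):

@* Counter.razor: naive version, identical source for WebAssembly and Server *@
<button @onclick="Increment" disabled="@(count >= 10)">
    Clicked @count times
</button>

@code {
    private int count;

    private void Increment()
    {
        // On WebAssembly this runs in the browser immediately after the click,
        // so the disabled binding takes effect before an eleventh click is possible.
        // On Server the click first travels to the server; clicks that arrive
        // before the re-render is pushed back can increment the counter past 10.
        count++;
    }
}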
As a programmer using Blazor Server, I can no longer rely on the synchronous, uninterrupted handling of user interaction events that I may be used to in JavaScript applications. I have to clutter all my event handlers with additional sanity checks just to make sure nothing unexpected happens. This also prevents a straightforward migration from Blazor WebAssembly to Blazor Server (and frankly, vice versa).
Note that I am not convinced by the argument that the latency is so small that no real user is able to click again before the GUI is updated. Even under very controlled network conditions, delays might happen unexpectedly. Also, it is never a good choice to rely on chance and external factors. Remember Murphy's Law?
My question is: How do you deal with this in practice? Do you bother writing all these sanity checks in your event handlers?
I concluded that, in order to treat Blazor WebAssembly and Blazor Server equally at the source code level, you have to apply certain patterns and restrictions yourself, as otherwise the runtime behaviors might differ.
One of these things is the input delay that occurs in Blazor Server. Even if it is, or should be, small, you still have to address it if you want to prevent nasty bugs when switching from WebAssembly to Server.
One way to address it is to guard each and every event handler (that reacts to a user interaction event) with code ensuring that the state of the frontend is indeed such that the handler can safely be executed. In a "traditional" SPA web application (like Angular) you should guard against undesired operations on the server side anyway, and the same holds for Blazor applications. However, in a Blazor application you additionally have to guard your frontend much more tightly, all because you cannot rely on uninterrupted synchronous handler execution.
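As a rough sketch of what such a guard can look like (illustrative names, not a complete component), the handler re-checks the state itself instead of trusting that the disabled button made further clicks impossible:

<button @onclick="IncrementAsync" disabled="@(busy || count >= 10)">
    Clicked @count times
</button>

@code {
    private int count;
    private bool busy;

    private async Task IncrementAsync()
    {
        // Guard: on Blazor Server, additional clicks may already be queued before
        // the re-render that disables the button reaches the browser, so the
        // handler must re-validate the state before doing any work.
        if (busy || count >= 10)
        {
            return;
        }

        busy = true;
        try
        {
            count++;
            await SaveAsync(); // placeholder for whatever work the handler really does
        }
        finally
        {
            busy = false;
        }
    }

    private Task SaveAsync() => Task.CompletedTask; // stub so the sketch is self-contained
}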
In my opinion, Blazor is a leaky abstraction, as the decision between WebAssembly and Server is not merely an implementation detail.
I'd disable all buttons on click via JavaScript and enable them again after the save completes.
I have an app which needs almost no user interaction, but requires Geofences. Can I run this entirely within a background service?
There will be an Activity when the service is first run. This Activity will start a service and register a BroadcastReceiver for BOOT_COMPLETED, so the service will start at boot. It's unlikely that this Activity will ever be run again.
The service will set an Alarm to go off periodically, which will cause an IntentService to download a list of locations from the network. This IntentService will then set up Geofences around those locations, and create PendingIntents which will fire when the locations are approached. In turn, those PendingIntents will cause another IntentService to take some action.
All this needs to happen in the background, with no user interaction apart from starting the Activity for the first time after installation. Hence, the Activity will not interact with LocationClient or any location services.
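For context, the bootstrap part of that setup looks roughly like the sketch below (written in C#/Xamarin.Android syntax purely for illustration; all class names are placeholders, and the actual geofence registration is omitted):

using Android.App;
using Android.Content;

// Requires android.permission.RECEIVE_BOOT_COMPLETED in the manifest.
[BroadcastReceiver(Enabled = true)]
[IntentFilter(new[] { Intent.ActionBootCompleted })]
public class BootReceiver : BroadcastReceiver
{
    public override void OnReceive(Context context, Intent intent)
    {
        // Restart the long-running service after a reboot.
        context.StartService(new Intent(context, typeof(GeofenceService)));
    }
}

[Service]
public class GeofenceService : Service
{
    public override StartCommandResult OnStartCommand(Intent intent, StartCommandFlags flags, int startId)
    {
        // Periodically fire the IntentService that downloads locations and registers geofences.
        var alarmManager = (AlarmManager)GetSystemService(AlarmService);
        var pending = PendingIntent.GetService(
            this, 0, new Intent(this, typeof(LocationDownloadService)), PendingIntentFlags.UpdateCurrent);

        alarmManager.SetInexactRepeating(
            AlarmType.ElapsedRealtimeWakeup,
            Android.OS.SystemClock.ElapsedRealtime(),
            AlarmManager.IntervalHour,
            pending);

        return StartCommandResult.Sticky;
    }

    public override Android.OS.IBinder OnBind(Intent intent) => null;
}

[Service]
public class LocationDownloadService : IntentService
{
    protected override void OnHandleIntent(Intent intent)
    {
        // Placeholder: download the location list here, then register geofences
        // with PendingIntents that trigger the second IntentService.
    }
}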
I've actually got this set up with proximityAlerts, but wish to move to the new Geofencing API for battery life reasons. However, I have heard that there can be a few problems with using LocationClient from within a service. Specifically, what I've heard (sorry, no references, just hearsay claims):
LocationClient relies on UI availability for error handling.
When called from a background thread, LocationClient.connect() assumes that it is called from the main UI thread (or another thread with an event looper), so the connection callback is never called if the method is called from a service running on a background thread.
When I've investigated, I can't see any reason why this would be the case, or why it would stop me doing what I want. I was hoping it would be almost a drop-in replacement for proximityAlerts...
Can anyone shed some light on things here?
The best thing would be to just try it out, right? Your strategy seems sound.
When called from a background thread, LocationClient.connect() assumes that it is called from the main UI thread (or another thread with an event looper), so the connection callback is never called if the method is called from a service running on a background thread.
I know this to be not true. I have a Service that is started from an Activity, and the connection callback is called.
I don't know about proximity alerts, but I can't seem to find an API to list my geofences. I am worried that my database (SQLite) and the actual fences might get out of sync. That is a design flaw in my opinion.
The reason LocationClient needs a UI is that the device may not have Google Play Services installed. Google has devised a cunning and complex mechanism that allows your app to prompt the user to download it. The whole thing is horrible and awful in my opinion. It's all "what-if what-if" programming.
(They rushed a lot of stuff out the door for Google I/O 2013. Not all of it is well documented, and some of it seems a bit "rough around the edges".)
I have a site that is moving incredibly slowly right now. Both Safari's inspector and Firebug are reporting that most of the load time is due to latency. The actual download is happening in less than a second. There's a lot of database activity in play (though the metrics on that indicate that it's pretty healthy), but what else can cause really high latency? Is it a purely network thing or are there changes I can make to the app to improve the latency numbers?
I'm using YSlow to help identify performance improvements, but on the whole, I don't see it reporting anything that seems crazy unreasonable. Opportunities for improvement, certainly, but nothing that seems like it would cause the huge load times I'm seeing.
Thanks.
UPDATE
Some background and metrics, in case it's useful. This is a CakePHP application and I'm using my UsersController::login action as the benchmark. For the sake of identifying how much of a factor the application code plays in this, I've printed a stacktrace immediately upon entering UsersController::beforeFilter(). Here's output:
UsersController::beforeFilter() - APP/controllers/users_controller.php, line 13
Controller::startupProcess() - CORE/cake/libs/controller/controller.php, line 522
Dispatcher::_invoke() - CORE/cake/dispatcher.php, line 187
Dispatcher::dispatch() - CORE/cake/dispatcher.php, line 171
[main] - APP/webroot/index.php, line 83
Load times, as shown by Safari's inspector range from 11.2 seconds to 52.2 seconds. This would seem to point me away from the application code and maybe something with my host, but maybe I'm completely misinterpreting this or oversimplifying it?
If you cannot directly identify a slow-moving component of your application, there are a number of other steps along the way that can certainly slow your site down. Whenever I'm experiencing unusually long response times, I typically start by looking at the local DNS and then at my hosted DNS. Sometimes a cache refresh (on their part, not yours) can cause a lot of extra lookups until their database has caught up.
Otherwise, they might actually have a service outage and your requests are being made to their secondary or backup server. If everything seems fine in terms of domain resolution, your hosting provider might be experiencing a service outage, which can take a number of different shapes, like serving static content from backups or over-allocating shared resources until everything is running as it should. You can experience a ton of what they call throttling on shared cloud architectures when they have a box go down. On the plus side, you don't have a total outage in this circumstance.
One time, and this was just in a shared grid configuration, I had a processor go to hell. The bizarre part of it was that static content was still serving from a backup, but it was still polling against our database (which was on a different server) and causing our account to throttle because of over allocation on the backup. Wasn't our fault, but the host started sending nasty emails about our excessive long-polls. Moral of the story is, if it's not your application, and it's out of the blue, somewhere along the line I'll bet you'll find some hardware failure or misconfiguration.
Also, now that I think of it, if you are syndicating some outside content (be it server- or browser-side), it might not be within your chain of responsibility at all. If you are serving ads, for example, from a subscriber service, they might be having a high-load period or an outage. These are just the steps that I would take to narrow down the culprit.
This probably won't be the solution for you, but when I had a dog-slow Safari (and FF too), I simply changed the DNS servers to OpenDNS (208.67.222.222, 208.67.220.220) and all my problems were resolved.
I have noticed that some of my ajax-heavy sites (ones I visit, not ones I have built), have certain auto-refresh features. For example, in GMail, if I get a new message, I see the new message without a page reload. It's the same with the Facebook browser-based IM client. From what I can tell, there aren't any java applets handling the server-browser binding, so I'm left to assume it's being done by AJAX and perhaps some element I'm unaware of. So by my best guess, it's done in one of two ways:
The JavaScript does a steady "ping" to a server-side script, checking for any updates that might be available (which would explain why some of these pages bring any other heavy-duty pages to a crawl); or
The JavaScript sits idly by and a server-side script actually "pushes" any updates to the browser. But I'm not sure if this is possible. I'd imagine there is some kind of AJAX function that still pings, but all it asks is "any updates?" and the server-side script has a simple boolean that says "nope" or "I'm glad you asked." But if this is the case, any data changes would need to call the script directly so that it has the data changes ready and updates that boolean flag.
So is that possible/feasible/how it works? I imagine something like:
Someone sends an email/IM/DB update to the server, the server calls the script using the script's URL plus some relevant GET variable, the script notes the change and updates the "updates available" variable, the AJAX gets the response that there are in fact updates, the AJAX runs its normal "update page" functions, which executes the normal update scripts and outputs them to the browser.
I ask because it seems really inefficient that the js is just doing a constant check which requires a) the server to do work every 1.5 seconds, and b) my browser to do work every 1.5 seconds just so that on my end I can say "Oh boy, I got an IM! just like a real IM client!"
Read about Comet
I've actually been working on a small .NET web app that uses the AJAX long-polling technique described.
Depending on what technology you're using, you could use thread signaling mechanisms to hold your request until an update is retrieved.
With ASP.NET I'm running my server on a single machine, so I store a reference to my Producer object (which contains a thread that processes the data). To initiate the data pull, my service's Subscribe method is called, which creates a Consumer object that's registered with the Producer. If the Consumer is in long polling mode, it has an AutoResetEvent which is signalled whenever it receives new data, and whenever the web client makes a request for data, the Consumer first waits on the reset event and then returns the data.
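A stripped-down sketch of that Consumer (illustrative names, omitting the Producer and the ASP.NET plumbing) looks something like this:

using System;
using System.Collections.Concurrent;
using System.Threading;

public class Consumer
{
    private readonly AutoResetEvent _newData = new AutoResetEvent(false);
    private readonly ConcurrentQueue<string> _pending = new ConcurrentQueue<string>();

    // Called by the Producer's worker thread whenever it has fresh data.
    public void Publish(string data)
    {
        _pending.Enqueue(data);
        _newData.Set();
    }

    // Called by the long-polling web request; blocks until data arrives
    // or the timeout elapses, then returns whatever is queued.
    public string WaitForData(TimeSpan timeout)
    {
        if (_pending.IsEmpty)
        {
            _newData.WaitOne(timeout);
        }

        return _pending.TryDequeue(out var data) ? data : null;
    }
}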
But you mentioned something about PHP - as far as I know, persistence there is maintained through serialization, not by actually keeping the object in memory, so I don't know how you could reference a Producer object using $_CACHE[] or $_SESSION[]. When I developed in PHP I never really knew anything about multithreading, so I didn't play around with it, but I guess you can look into that.
Using infinite loops is going to consume a lot of your processing power - I would exhaust all other options first.
This is my setting: I have written a .NET application for local client machines, which implements a feature that could also be used on a webpage. To keep this example simple, assume that the client installs a software into which he can enter some data and gets some data back.
The idea is to create a webpage that holds a form into which the user enters the same data and gets the same results back as above. Due to the company's available web servers, the first idea was to create a mono webservice, but this was dismissed for reasons unknown. The "service" is not to be run as a webservice, but should be called by a PHP script. This is currently realized by calling the mono application via shell_exec from PHP.
So now I am stuck with a mono port of my application, which works fine, but takes way too long to execute. I have already stripped out all unnecessary DLLs, methods, etc., but calling the application via the command line - submitting the desired data via command-line parameters - takes approximately 700ms. We expect about 10 hits per second, so this could only work by setting up a lot of servers for this task.
I assume the 700ms are related to the cost of starting the application every time, because it does not make much difference in terms of time whether I handle the request once or five hundred times (I take the original input, vary it slightly and do 500 iterations with "new" data every time; starting from the second iteration, the processing time drops to approximately 1ms per iteration).
My next idea was to setup the mono application as a remoting server, so that it only has to be started once and can then handle incoming requests. I therefore wrote another mono application that serves as the client. Calling the client, letting the client pass the data to the server and retrieving the result now takes 344ms. This is better, but still way slower than I would expect and want it to be.
I have then implemented a new project from scratch based on this blog post and get stuck with the same performance issues.
The question is: am I missing something related to the mono-projects that could improve the speed of the client/server? Although the idea of creating a webservice for this task was dismissed, would a webservice perform better under these circumstances (as I would not need the client application to call the service), although it is said that remoting is faster than webservices?
I could have made that clearer, but implementing a webservice is currently not an option (and please don't ask why, I didn't write the requirements ;))
Meanwhile I have checked that it's indeed the startup of the client, which takes most of the time in the remoting scenario.
I could imagine accessing the server via pipes from the command line, which would be perfectly suitable in my scenario. I guess this would be done using sockets?
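Something along these lines is what I have in mind (a very rough sketch; the port, the one-line protocol and ProcessRequest are placeholders for the real application logic):

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

public static class ResidentServer
{
    public static void Main()
    {
        // Keep one Mono process alive and let it answer requests over a local socket.
        var listener = new TcpListener(IPAddress.Loopback, 5050);
        listener.Start();
        Console.WriteLine("Listening on 127.0.0.1:5050");

        while (true)
        {
            using (var client = listener.AcceptTcpClient())
            using (var stream = client.GetStream())
            using (var reader = new StreamReader(stream))
            using (var writer = new StreamWriter(stream) { AutoFlush = true })
            {
                // One request per connection: read a single line of input,
                // run the existing processing code, write the result back.
                var input = reader.ReadLine();
                var result = ProcessRequest(input);
                writer.WriteLine(result);
            }
        }
    }

    private static string ProcessRequest(string input)
    {
        // Placeholder for the work the current command-line application does.
        return input;
    }
}

The PHP script would then connect to that port for each request instead of starting a new mono process via shell_exec.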
You can try using AOT to reduce the startup time. On .NET you would use ngen for that purpose; on Mono, just run mono --aot on all assemblies used by your application.
AOT'ed code is slower than JIT'ed code, but has the advantage of reducing startup time.
You can even try to AOT framework assemblies such as mscorlib and System.
I believe that remoting is not an ideal thing to use in this scenario. However, your idea of keeping mono running on the server instead of starting it every time is indeed solid.
Did you consider using SOAP webservices over HTTP? This would also help you with your 'web page' scenario.
Even if it is a little too slow for you, in my experience a custom RESTful service implementation would be easier to work with than remoting.