BackgroundTransferService - When does TransferProgressChanged get called? - windows-phone-7

When using BackgroundTransferService on windows phone 7, is there any way to control how often TransferProgressChanged is called? I'm guessing that it's related to the size of the internal buffer used for the download, but I don't see any properties for setting this.
The BytesReceived values are not equally spaced, or even the same for repeated downloads of the same file, but they appear to arrive at roughly every 1% of the total file size.
This is OK for a small file, but makes for a very unresponsive user interface when downloading large (2GB movie) files.

The BackgroundTransferService reports progress when the percentage downloaded changes (only on whole numbers, as you have noted). This is in line with Marketplace downloads etc., where the progress sometimes takes a long time to update (at least when I am forced to use my EDGE connection to download).
In your case, if the files are that big, I would use a second animation so the user knows that the download is still in progress. I would probably add a 'Downloading...' text above the actual percentage progress display and animate the ellipsis.
It is relatively easy to report progress on a download that you perform yourself, but the BackgroundTransferService is controlled by the OS and hence must deal with resource allocation across all apps. If you are using it, then most of the time the user won't even see the progress display, as they will be doing something else. This means that reporting progress too often is a waste of resources. If I were to download a 2GB file to my phone, I would be checking progress every 20-30 minutes, and I wouldn't be waiting for the progress display to update before returning to whatever else I was doing.
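To illustrate why the callbacks land roughly every 1% and at uneven byte offsets, here is a minimal sketch of the whole-percent behaviour described above. The actual service is implemented inside the OS and is not configurable; this is only an illustration of the reporting logic (in Python for brevity), not WP7 code:

def report_progress(total_bytes, chunks, on_progress):
    # Fire on_progress only when the integer percentage changes.
    received = 0
    last_percent = -1
    for chunk in chunks:                      # chunks arrive from the network in arbitrary sizes
        received += len(chunk)
        percent = received * 100 // total_bytes
        if percent != last_percent:           # only whole-number changes get reported
            last_percent = percent
            on_progress(received, percent)

Because the callback is keyed to whole-percent boundaries rather than to fixed byte counts, the BytesReceived values land at slightly different offsets on each run, and a 2GB file still produces only about 100 callbacks (one per ~20MB), which is why the UI feels unresponsive for large downloads.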

Related

Freezing while downloading large datasets through Shodan?

I'm using Shodan's API through the Anaconda Terminal on Windows 10 to get data for the query below, but after a few seconds of running, the ETA timer freezes and my network activity drops to zero. Hitting Ctrl+C when this happens restarts it and gets it moving again for a few seconds, but it soon stops again.
shodan download --limit 3100000 data state:"wa"
Also, while it is running, the download speed seems pretty slow, and I wanted to ask if there is any way to speed it up. My university's internet is capable of upwards of 300 Mbps, but the download seems to cap at 5 Mbps.
I don't know how to solve either of these issues; my device has enough space and my internet isn't disconnecting. We've tried running the Anaconda Terminal as an Administrator, but that hasn't helped either.
I am not familiar with the specific website, but in general, limited speeds or stalled downloads are not caused by things 'on your side' like the university connection, or even your download script.
Odds are that the website wants to protect itself and that you need to use the API differently (for example, with a different account), or that your account has some usage limit that you are hitting.
The best course of action may be to contact the website and ask them how to do this.
I heard back from Shodan support; cross-posting some of their reply here:
The API is not designed for large, bulk export of data. As a result, you're encountering a few problems/limits:
There is a hard limit of 1 million results per search query. This means that it isn't possible to download all results for the search query "state:wa".
The search API performs best on the first few pages and progressively responds slower the deeper into the results you get. This means that the first few pages return instantly whereas the 100th page will take potentially 10+ seconds.
You can only send 1 request per second, so you can't multiplex/parallelize the search requests.
A lot of high-level analysis can be performed using search facets. There's documentation on facets in the shodan.pdf booklet floating around their site; facets return summary information from the API.
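For example, if what you actually need is summary statistics rather than every banner, a facet query sidesteps the paging and rate limits entirely. Here is a minimal sketch using the official shodan Python library (it assumes you already have an API key; the facet names are just examples):

import shodan

api = shodan.Shodan("YOUR_API_KEY")  # assumption: you already have an API key

# Ask for a summary of the matches instead of downloading them all.
# Each facet is (facet name, number of buckets to return).
result = api.count('state:"wa"', facets=[("port", 10), ("org", 10)])

print("Total results:", result["total"])
for facet, buckets in result["facets"].items():
    print(f"Top values for {facet}:")
    for bucket in buckets:
        print(f'  {bucket["value"]}: {bucket["count"]}')

The CLI exposes the same idea via shodan stats --facets, if you prefer not to write any code (check shodan stats --help for the exact syntax).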

Why do software updates exist?

I know this may sound crazy but hear me out...
Say you have a game and you want to update it (add new features, redecorate for seasonal themes, add LTMs, etc.). Now, instead of editing your code and then waiting days for your app market provider (Google/Microsoft/Apple etc.) to approve the update and roll out the changes, why not:
Put all of your code into a database
Remove all of your existing code from your code files
Add code which can run code from a database (reads it in and eval()s it)
This way, there'd be no need for software updates unless you wanted to change your database-related code, and you could simply update your database to change what the app does when it's live.
My Question: Why hasn't this been done?
For example:
Fortnite (a real game) often has LTMs (Limited Time Modes) which are available for a few weeks and are then removed. Generally, the software updates are ~ 5GB and take a lot of time unless your broadband is fast. If the code was fetched from a database and then executed, there'd be no need for these updates and the changes could be instantaneous.
EDIT: (In response to the close votes)
I'm looking for facts and statistics to back up reasons rather than just pure opinions. Answers like 'I think this would be good/bad ...' aren't needed (that's what comments are for); answers like 'This would be good/bad because this fact shows that ...' are much better and desired.
There are a few challenges with your suggested approach:
Putting everything in a database will increase the size of the database, but that doesn't matter much.
If all the code is in the database, it may become possible to decompile your software and find a way to connect to your code directly.
eval() has too much performance overhead; precompiled code is optimized for its respective runtime.
Multiple versions: what do you do when you want to maintain multiple versions of your software?
The main reason updates exist is that they are easy to maintain, flexible, and let the developer ship the fastest, most optimized tool.
Put all of your code into a database
Remove all of your existing code from your code files
Add code which can run code from a database (reads it in and eval()s it)
My Question: Why hasn't this been done?
This is exactly how every game works already.
Each time you launch a game, an executable binary game engine (which you describe in step 3) already reads the rest of your code (often in some embedded language like Lua) from a "database" (the file system) and "evals" (interprets) it to run the game, as well as assets like level geometry, textures, sounds and music.
You're talking about introducing a layer of abstraction (a real database) between the engine and its data to hide some of your assets, but the database stores its data on the file system anyway, so you really haven't gained anything; you've just changed the way the data is encoded at rest and queried at runtime, and introduced a ton of overhead in both cases.
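To make that concrete, here is a toy sketch (in Python, as a stand-in for whatever engine/scripting split a real game uses). The "engine" is a fixed program; the "content" is code it loads from a store and interprets. Whether that store is loose files or a SQLite database changes nothing about how many bytes exist or have to be shipped:

import sqlite3

# Toy "engine": a fixed program that loads game logic from a data store and runs it.
def load_mode_source(db_path, mode_name):
    con = sqlite3.connect(db_path)
    row = con.execute("SELECT source FROM game_modes WHERE name = ?", (mode_name,)).fetchone()
    con.close()
    return row[0]

def run_mode(db_path, mode_name):
    source = load_mode_source(db_path, mode_name)
    exec(source)  # the "eval" step: interpret whatever content was shipped

# Shipping a new limited-time mode still means shipping new rows/blobs for this
# database -- the same bytes you would otherwise have shipped as files.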
On the other hand, you're intentionally cheating your way through the app review process this way, and whatever real technical problems you would have are moot, because your app would not be allowed in the app store. The entire point of the app review process is to prevent people from shipping unverified, unreviewed code to users, and if your program is obviously designed to circumvent this, your app will be rejected.
Fortnite (a real game) often has LTMs (Limited Time Modes) which are available for a few weeks and are then removed. Generally, the software updates are ~ 5GB and take a lot of time unless your broadband is fast.
Fortnite will have a small binary executable that is the game engine. Updates to this binary will account for a tiny fraction of that 5GB. The rest will be some kind of interpreted/embedded language describing the game's levels (also a tiny fraction) and then assets, which account for the rest (geometry, textures, sound, music).
If the code was fetched from a database and then executed, there'd be no need for these updates and the changes could be instantaneous
This makes no sense. If you move that entire 5GB from the file system into a database, you still have to transfer around 5GB worth of database updates. 5GB of data in a database still lives as 5GB of data on the file system, it's just that you can't access it directly anymore. You have to transfer around the exact same amount of data, regardless of how you store it.

Chrome extension effect on page load time

I'm writing a Chrome extension and I want to measure how it affects performance; specifically, right now I'm interested in how it affects page load times.
I picked a certain page I want to test, recorded it with Fiddler, and I use this recording as an AutoResponder in Fiddler. This allows me to measure load times without network delays.
Using this technique I found out that my extension adds ~1200ms to the load time. Now I'm trying to figure out what causes the delay and I'm having trouble understanding the DevTools Performance results.
First of all, it seems there's a discrepancy in the reported load time:
On one hand, the summary shows a range of ~13s, but on the other hand, the load event arrived after ~10s (which I also corroborated using performance.timing.loadEventEnd - performance.timing.navigationStart):
The second thing I don't quite understand is how the numbers add up (or rather, don't add up). For example, here's a grouping of different categories during load:
Neither of these columns sums to 10s or to 13s.
When I group by domain I can get different rows for the extension and for the rest of the stuff:
But it seems that the extension only adds 250ms which is much lower than the exhibited difference in load times.
I assume that these numbers represent just CPU time, and do not include any wait time. Is this correct? If so, it's OK that the numbers don't add up and it's possible that the extension doesn't spend all its time doing CPU bound work.
Then there's also the mysterious [Chrome extensions overhead], which doesn't explain the difference in load times either. Judging by the fact that it's a separate line from my extension, I assumed they were mutually exclusive, but if I dive deeper into the specifics, I find my extension's functions under the [Chrome extensions overhead] subdomain:
So to summarize, this is what I want to be able to do:
Calculate the total CPU time my extension uses - it seems it's not enough to look under the extension's name, and its functions might also appear in other groups (a rough way to approximate this is sketched below).
Understand whether the delay in load time is caused by CPU processing or by synchronous waiting. If it's the latter, find where my extension is doing a synchronous wait, because I'm pretty sure that I didn't call any blocking APIs.
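One rough way to approximate the first point is to export the recording from the Performance panel ("Save profile" writes a JSON trace) and sum the durations of trace events attributed to the extension's chrome-extension:// origin. The event and argument names below are assumptions about the trace format rather than a documented API, and nested events get double-counted, so treat the result as an upper bound:

import json

EXTENSION_ORIGIN = "chrome-extension://<your-extension-id>"  # placeholder: fill in your extension's ID

def extension_cpu_ms(trace_path):
    with open(trace_path) as f:
        data = json.load(f)
    # "Save profile" sometimes writes a bare list of events, sometimes {"traceEvents": [...]}.
    events = data["traceEvents"] if isinstance(data, dict) else data

    total_us = 0
    for event in events:
        url = ((event.get("args") or {}).get("data") or {}).get("url", "")
        # Complete events (ph == "X") carry a duration in microseconds.
        if event.get("ph") == "X" and url.startswith(EXTENSION_ORIGIN):
            total_us += event.get("dur", 0)
    return total_us / 1000.0

This only accounts for CPU time recorded in the trace; synchronous waiting won't show up as duration here, which matches the suspicion above that the numbers represent CPU time only.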
Update
Eventually I found out that we also activated Chrome accessibility whenever our extension was running, and that's what caused the drastic slowdown. Without accessibility the extension had a very minor effect. I still wonder, though, how I could have seen in the profiler that accessibility was my problem. It could have saved me a ton of time... I will try to look at it again later.

Why is trying to open a TOpenDialog spawning a ton of threads?

I've got a very simple form with a TOpenDialog and a button on it. When I press the button, it calls Execute on the dialog. If I watch in the debugger, the act of opening the dialog box spawns something like 14 threads, and they don't go away when I close the dialog box either.
Anyone have any idea what's going on with that?
Imagine you want to show your friends how beautiful the Pacific Northwest is. You decide to set off on a trip to snap a few photos of sunset over the Pacific. What you really care about is the image files making their way home, where they can be uploaded to Facebook. In reality the camera, lenses and the tripod need to be hauled over the Olympics and back. You also need to bring the photographer (yourself) who will set the camera up and press the shutter. The photographer needs to be moved there and back in relative comfort, so you take a seat on which the photographer will rest while making the trip. This seat is enclosed in a shiny metal box with a bunch of other metal, glass and rubber parts, some of which are turning and reciprocating. In the end, about two tons of stuff (and a living human being) take a multi-hour trip, burning gallons of hydrocarbon liquid -- all with the goal of moving a few bits of information from the shore to the internet.
Exactly the same thing happens with your application. When the user wants to open a file using "Open File" dialog box, the user expects to be able to:
navigate to the directory containing the file (the directory may be on a local hard drive or CD/DVD/BR or a network drive or an archive, etc. The media may be encrypted or compressed, which needs to be displayed differently. The media may not be plugged in, for which the user might need to be prompted. The media may require the user's credentials, which have to be asked for);
connect to a new directory using its URI/UNC (map the drive);
search the directory for some keywords;
copy/delete/rename some files;
see the list of the files in that directory;
preview the content of each file in the directory;
select which file to open;
change his/her mind and decide not to open the file;
do many other file-related things.
The OS lets all this happen by essentially giving your process most of the Windows Explorer functionality. And some of it has to happen in the background, otherwise users will complain about how unresponsive the Open File dialog is. The obvious way to run some tasks in the background is to run them on different threads. So that's what we see.
What about the threads left behind, you ask? Well, some of them are left there in case the user decides to open another file: it saves a lot of time, traffic and typing. That custom authentication used for this one particular process last time? -- stored. The preview icons for those pesky PDFs? -- still there. The length and bitrate of every movie in the directory? -- still available, no need to re-parse them.
Of course the threads did not just magically appear by themselves. Check out how many DLLs have been mapped into the process. Looking at some of them one can get quite an interesting picture of what functionality has been added.
Another interesting way to look at it would be to dump the call stacks at the moment every thread gets created. This shows which DLL (and sometimes which object) created them. Here's how x64 Win7 creates all the threads. One can find the Explorer frame's thread getting created; some OLE activity which will be used to instantiate file filters, some of which can generate preview icons, overlays and tooltips; a few threads belonging to the search subsystem; the shell's device enumerator (so if the user plugs in a new device, it will automatically appear in the open dialog); the shell network monitor (ditto); and other stuff.
The good news is that it happens fast and doesn't add much overhead to your process. Most of the threads spend most of their time waiting for some seldom-occurring events (like a USB key being plugged in), so the CPU doesn't spend any time executing them. Each thread consumes 1MB of virtual address space in your process, but only a few 4KB pages of actual physical memory. And most, if not all, of those DLLs did not use any disk bandwidth to be loaded: they were already in RAM, so they just got mapped into your process almost for free.
In the end the user got a whole lot of useful functionality in a snappy UI, while the process had to do very little to achieve all that.

Fastest? ClientBundle vs plain URL images

Right now a large application I'm working on downloads all small images separately and usually on demand - about 1000 images ranging from 20 bytes to 40 KB. I'm trying to figure out if there will be any client performance improvements from using a ClientBundle for the smaller, most-used ones.
I'm putting the 'many connections, high latency' issue aside for now and just concentrating on javascript/css/browser performance.
Some of the images are used directly within CSS. Are there any performance improvements from "spriting" them vs. using them as usual?
Some images are created as new Image(url). Is it better to leave them this way, move them into CSS and apply styles dynamically, or load them from a ClientBundle?
Some actions result in a setURL on an image. I've seen that the same code can be done with a ClientBundle, which will probably set the data URI for that image. Will doing so improve performance, or is it faster the way it is?
I'm specifically talking about runtime more than startup time, since this is an application which sees long usage times and all images will probably be cached in the first 10 minutes, so round-trip is not an issue (for now).
Short answer: not really (for FF, Chrome, Safari, Opera), BUT sometimes for IE (<9)!
Let's look at what ClientBundle does.
ClientBundle packages every image into one ...bundle... so that all you need is one HTTP connection to get all of them, and it requires only one freshness lookup the next time you load your application (rather than n times, n being the number of your tiny images - really wasteful).
So it's clear that ClientBundle greatly improves your app's load time.
Runtime Performance
There may be times when one particular image fails to download or gets lost over the internet. If you make 1000 connections, the probability of something going wrong increases (however little). FF, Chrome, Safari and Opera simply show the image-not-found icon and move on. IE <9, however, will keep trying to get those particular images, using up one of the two connections it is allowed. That really impacts performance in IE.
Other than that, there will be some performance improvement if you keep loading new widgets asynchronously and they end up downloading images at a later stage.
Jai
