How to increase ClickOnce download speed?

I have created an application and deployed it with ClickOnce; it is about 30 MB. At first the download took about 2-3 hours over a 128 kbps leased line. After I enabled file compression on IIS, the download time dropped to about 40 minutes.
I want to get the download time down to 10-20 minutes, but I have no idea how to do that.
My solution has four projects, a, b, c and d, and two of them are very large (nearly 15 MB). I have tried to decompose those two projects, but their design is very tightly coupled and I have no time to do that.
If anyone has an idea or a solution for this problem, please help me.

Well, just about your only option is to find ways to compress or reduce the size of the projects themselves. That said, if you really have 128 kbps, it should not have taken anywhere near 2-3 hours for 30 MB unless other traffic was sharing that connection at the same time. As for the project itself, unless you can find a way to reduce its size (you've already enabled compression in IIS), there's really nothing else open to you.
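If you want to double-check that the IIS compression is actually being applied to the ClickOnce payload, a quick sketch along these lines can help; the URL is a placeholder for one of your .deploy files, and the use of requests with gzip/deflate negotiation is an assumption, not part of the answer above:

```python
# Check whether the server compresses a given deployment file over HTTP.
# The URL below is a placeholder for one of your ClickOnce .deploy files.
import requests

url = "http://yourserver/MyApp/a.dll.deploy"
resp = requests.get(url, headers={"Accept-Encoding": "gzip, deflate"}, stream=True)

# "gzip" or "deflate" here means IIS compressed the response; "none" means it did not.
print("Content-Encoding:", resp.headers.get("Content-Encoding", "none"))
print("Content-Length:", resp.headers.get("Content-Length"))
```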

A 128 kilobits per second connection should be able to transfer a 30 MB file in about 33 minutes, so it sounds like your connection is OK. You can possibly decrease the size of the deployment by getting rid of unnecessary dependencies (Visual Studio can perform this task for you, under Project Settings). You probably won't be able to decrease the file size very much, however.
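For reference, here is the arithmetic behind that figure as a tiny Python sketch, using the numbers from the question (30 MB payload, 128 kbps line):

```python
# Back-of-envelope transfer time for the figures in the question and answer above.
file_size_mb = 30      # ClickOnce payload size
link_kbps = 128        # leased-line speed, kilobits per second

kilobits = file_size_mb * 1024 * 8        # MB -> KB -> kilobits
minutes = kilobits / link_kbps / 60
print(f"Theoretical best case: {minutes:.0f} minutes")   # ~32 min, matching the estimate above
```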
I think the most reasonable thing for you to do is to try and upgrade your connection. Even my crappy consumer-grade upload rate nearly quadruples your 128kbps line.
Good luck.

Related

How does software like Voidtools' Everything index more than 100k files in less than a second?

There is a program called "Everything" that indexes all the files on your machine and then finds anything very fast, once the files are indexed.
I would expect the indexing phase to take a few minutes, but no: it takes a few seconds to index a full computer with multiple TB.
How is that possible? A simple loop over the files would take much longer.
What am I missing?
Enumerating files one by one through the official API would indeed take ages. But Everything reads the Master File Table (and later updates watch the USN Change Journal), according to the author himself, thereby bypassing the slow file-enumeration API.
"a full computer with multiple TB"
The total size of the files is not relevant, because Everything does not index file contents. MFT entries are 1 KB each, so for 100K files you can expect to read on the order of 0.1 GB to build an index from scratch (actually more because of non-file entries, but the same order of magnitude, and of course less when updating an existing index). That's not really a lot of data; it should be possible to read it in under a second.
Then processing 100K entries to build an index may seem like a task that could be slow, but for a sense of scale, compare it to the (tens of) billions of instructions that a contemporary computer can execute per second. "4 GHz" does not exactly mean "4 billion instructions per second" - if anything it understates things, since even an old CPU like the original Pentium could execute more than one instruction per cycle. On that scale alone, it is not unthinkable to build an index of 100K entries in a few seconds. Minutes would be excessive: that would correspond to millions of instructions per item, which is bad even for an O(n log n) algorithm (the base-2 log of 100K is about 17); surely we can do better than that.
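To put rough numbers on that, here is a small back-of-envelope sketch; the 1 KB per entry and 100K files come from the text above, while the 200 MB/s disk throughput and 4 GHz clock are illustrative assumptions:

```python
# Rough scale estimate: how long to read the MFT entries, and how many
# instructions per entry one second of CPU time would allow.
entries = 100_000
mft_entry_bytes = 1024                      # ~1 KB per MFT record
disk_mb_per_s = 200                         # assumed sequential read speed
instructions_per_s = 4e9                    # "4 GHz", ignoring superscalar issue

data_mb = entries * mft_entry_bytes / 1e6   # ~0.1 GB of raw MFT data
read_seconds = data_mb / disk_mb_per_s
budget_per_entry = instructions_per_s / entries   # 1 s of CPU time spread over all entries

print(f"MFT data to read: {data_mb:.0f} MB, ~{read_seconds:.2f} s at {disk_mb_per_s} MB/s")
print(f"Instruction budget per entry in one second: {budget_per_entry:,.0f}")
```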
Threading/multiprocessing can drastically improve speed; Everything is probably taking advantage of multiple cores. You said "a simple loop over the files", so I am assuming you are not aware of threading/multiprocessing.

During migration of CouchDB from 1.6.1 to 2.3.1, the couchup utility is taking a long time to rebuild views due to memory issues

During my migration of CouchDB from 1.6.1 to 2.3.1, the couchup utility is taking a lot of time to rebuild views, and it has memory issues. The databases are in the 500 GB range and it is taking forever: it has been almost 5-6 days and it is still not complete. Is there any way to speed it up?
When I try to replicate, CouchDB dies after 2-3 minutes of couchup running because of memory-leak issues, and then it starts again. Replication will take around 10 days. For replication couchup shows a progress bar, but for rebuilding views it does not, so I don't know how much has been done.
CouchDB is installed on a RHEL Linux server.
Reducing backlog growth
As couchup encounters views that take longer than 5 seconds to rebuild, it carries on calling additional view URLs, triggering their rebuilds. Once a number of long-running view rebuilds are in flight, even rebuilds that would otherwise have been short take at least 5 seconds, leading to a large backlog. If individual databases are large (or map/reduce functions are very inefficient) it is probably best to set the timeout to something like 5 minutes. If you see more than a couple of
Timeout, view is processing. Moving on.
messages, it is probably time to kill couchup and double the timeout.
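To make the mechanism concrete: as described above, couchup essentially requests each view URL so that CouchDB builds the index. A rough Python sketch of that idea is below; the localhost address, admin credentials and 300-second timeout are placeholder assumptions, not anything couchup itself documents:

```python
# Rough sketch of what couchup is doing: requesting each view URL forces CouchDB
# to (re)build that view's index. Host, credentials and timeout are placeholders.
import requests

BASE = "http://admin:password@localhost:5984"
TIMEOUT = 300   # seconds; the couchup timeout discussed above defaults to only 5

for db in requests.get(f"{BASE}/_all_dbs").json():
    if db.startswith("_"):
        continue   # skip system databases
    design_docs = requests.get(f"{BASE}/{db}/_design_docs",
                               params={"include_docs": "true"}).json()
    for row in design_docs.get("rows", []):
        ddoc = row["id"].split("/", 1)[1]
        for view in row["doc"].get("views", {}):
            try:
                # limit=1 keeps the response tiny; the request still builds the whole index.
                requests.get(f"{BASE}/{db}/_design/{ddoc}/_view/{view}",
                             params={"limit": 1}, timeout=TIMEOUT)
            except requests.exceptions.Timeout:
                print(f"Timeout, view is processing. Moving on. ({db}/{ddoc}/{view})")
```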
Observing index growth
By default view_index_dir is the same as the database directory, so if data is in /var/lib/couchdb/shards then /var/lib/couchdb is the configured directory and the indexes are stored in /var/lib/couchdb/.shards. You can watch which index shard files are being created and growing, or move view_index_dir somewhere separate for easier observation.
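If you want to watch that growth without repeatedly eyeballing ls output, here is a minimal sketch that polls the index directory and prints per-file sizes; the path assumes the default layout mentioned above:

```python
# Print the size of each view index shard file so you can see which ones are growing.
# The path assumes the default view_index_dir discussed above.
import os, time

INDEX_DIR = "/var/lib/couchdb/.shards"

while True:
    for root, _dirs, files in os.walk(INDEX_DIR):
        for name in files:
            path = os.path.join(root, name)
            size_mb = os.path.getsize(path) / 1e6
            print(f"{size_mb:10.1f} MB  {path}")
    print("-" * 40)
    time.sleep(60)   # re-check every minute
```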
What resources are running out?
You can tune CouchDB in general, but it is hard to say whether tuning will still be needed once the system is no longer rebuilding all of its indexes.
In particular, you would want to look for and disable any auto-compaction. Look at the files in /proc/[couchdb proc] to figure out the effective fd limits and how many files are open, and whether the crash happens around a specific number of open files. Due to sharding, the number of open files is usually a multiple of the number in earlier versions.
Look at memory growth and figure out whether it stabilizes at a level where adding swap would prevent the problem.
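A small sketch for those /proc checks, assuming a Linux host and that you substitute the real couchdb process ID (and run it as root or the couchdb user so /proc is readable):

```python
# Compare the couchdb process's open file descriptors against its limit,
# and report its resident memory, using /proc. The PID is a placeholder.
import os, re

pid = 12345   # <-- replace with the real couchdb PID

open_fds = len(os.listdir(f"/proc/{pid}/fd"))

soft_limit = None
with open(f"/proc/{pid}/limits") as f:
    for line in f:
        if line.startswith("Max open files"):
            soft_limit = int(re.split(r"\s{2,}", line.strip())[1])
            break

rss_kb = None
with open(f"/proc/{pid}/status") as f:
    for line in f:
        if line.startswith("VmRSS:"):
            rss_kb = int(line.split()[1])
            break

print(f"open fds: {open_fds} / soft limit: {soft_limit}")
print(f"resident memory: {rss_kb} kB")
```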

Will using multiple storyboard files increase project build time?

My current application has only one story board file with ~100 view controllers. It takes ~10 minutes to load and ~1 minute to build. Will breaking the one storyboard file I currently have into ~20 storyboard files provide me with much, much quicker load and build times? Note: In the future, the application I am working on may reach over ~100 storyboard files with ~10 view controllers per file. Would this be a reasonable long term solution to this problem? I would like to get the build time down to ~15 seconds instead of several minutes.
We applied the multiple-storyboards approach instead of the single-storyboard approach and got better results in both time and teamwork.
Try to reuse your layouts by creating nib files.
Also, think about breaking your project into multiple projects/modules if possible. Try to code as modularly as possible so you will have:
Less build time
Less maintenance cost
Reusing modules in other projects
Better teamwork
Making a storyboard smaller and simpler will definitely make it much faster to open through Interface Builder in the Xcode editor.
Moreover, making your Main storyboard smaller and simpler will greatly decrease the launch time and initial memory footprint of your actual app.
However, many other factors influence build times. Xcode now lets you profile your build process, so take advantage of that feature. WWDC 2018 includes an extensive video on the topic of build times.
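Outside of Xcode's built-in build timing, a crude way to compare before/after numbers when you split the storyboard is simply to time xcodebuild from a script; the workspace and scheme names below are placeholders for your own project:

```python
# Rough baseline timer for comparing build times before/after splitting storyboards.
# Workspace and scheme names are placeholders for your own project.
import subprocess, time

cmd = ["xcodebuild", "-workspace", "MyApp.xcworkspace", "-scheme", "MyApp", "build"]

start = time.monotonic()
subprocess.run(cmd, check=True, capture_output=True)
print(f"build took {time.monotonic() - start:.1f} s")
```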

How to compare file upload and download speed at varying times in my application?

I would like to compare data read/write speed (i.e., the file upload and download speed) with my application between various servers (like machineA, machineB and machineC) at varying times.
I have tried to automate the download with the help of curl, as suggested here.
The network speed varies from time to time, and I cannot make parallel test runs between the machines. In that case, what would be the best way to make a valid data read/write speed comparison with respect to network speed?
Are there any open-source tools to do these speed tests?
Any suggestions would be greatly appreciated!
The timings will never be exactly the same; keep that in mind. The best you can do is set the parameters equally for each test and run some number X of tests, then take the average time of the series. That is a good way to do it.
The other part is your software itself. You don't say what the application is, but you can write code that starts a timer right before the download and stops it right after, with no post-processing in between. Store the data (machine ID, download time, upload time, etc.) and then compare.
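As a concrete sketch of that approach, the snippet below repeats a curl download N times and averages curl's own time_total measurement; the URL and run count are placeholders:

```python
# Repeat a download N times with curl and average the measured transfer time.
# The URL and repeat count are placeholders; -w '%{time_total}' is curl's built-in timer.
import subprocess, statistics

URL = "http://machineA.example.com/testfile.bin"
RUNS = 5

times = []
for _ in range(RUNS):
    out = subprocess.run(
        ["curl", "-s", "-o", "/dev/null", "-w", "%{time_total}", URL],
        capture_output=True, text=True, check=True,
    )
    times.append(float(out.stdout))

print(f"{URL}: mean {statistics.mean(times):.2f} s over {RUNS} runs "
      f"(stdev {statistics.stdev(times):.2f} s)")
```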
Hope it helps!

Is there a reason why SSIS significantly slows down after a few minutes?

I'm running a fairly substantial SSIS package against SQL 2008 - and I'm getting the same results both in my dev environment (Win7-x64 + SQL-x64-Developer) and the production environment (Server 2008 x64 + SQL Std x64).
The symptom is that the initial data load screams along at between 50K and 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.)
The problem has the following characteristics:
Problem persists no matter what the target table is.
RAM usage is lowish (45%) - there's plenty of spare RAM available for SSIS buffers or SQL Server to use.
Perfmon shows buffers are not spooling, disk response times are normal, disk availability is high.
CPU usage is low (hovers around 25% shared between sqlserver.exe and DtsDebugHost.exe)
Disk activity primarily on TempDB.mdf, but I/O is very low (< 600 Kb/s)
OLE DB destination and SQL Server Destination both exhibit this problem.
To sum it up, I would expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem.
I'll gratefully reward any reasonable answers / suggestions.
We finally found a solution... the problem lay in the fact that my client was using VMware ESX, and despite the VM reporting plenty of free CPU and RAM, the VMware gurus had to pre-allocate (i.e. guarantee) the CPU for the SSIS guest VM before it really started to fly. Without this, SSIS would be running but VMware would scale back the resources - an odd quirk, because other processes and software kept the VM happily awake. Not sure why SSIS was different, but as I said, the VMware gurus fixed the problem by reserving RAM and CPU.
I have some other feedback by way of a checklist of things to do for great performance in SSIS:
1. Ensure the SQL login has the BULK DATA permission, else the data load will be very slow. Also check that the target database uses the Simple or Bulk Logged recovery model.
2. Avoid sort and merge components on large data - once they start swapping to disk, performance gutters.
3. Source sorted input data (according to the target table's primary key), disable non-clustered indexes on the target table, and set MaximumInsertCommitSize to 0 on the destination component. This bypasses TempDB and the log altogether.
4. If you cannot meet the requirements for 3, then simply set MaximumInsertCommitSize to the same size as the data flow's DefaultMaxBufferRows property.
The best way to diagnose performance issues with SSIS Data Flows is with decomposition.
Step 1 - measure your current package performance. You need a baseline.
Step 2 - Backup your package, then edit it. Remove the Destination and replace it with a Row Count (or other end-of-flow-friendly transform). Run the package again to measure performance. Now you know the performance penalty incurred by your Destination.
Step 3 - Edit the package again, removing the next transform "up" from the bottom in the data flow. Run and measure. Now you know the performance penalty of that transform.
Step 4...n - Rinse and repeat.
You probably won't have to climb all the way up your flow to get an idea as to what your limiting factor is. When you do find it, then you can ask a more targeted performance question, like "the X transform/destination in my data flow is slow, here's how it's configured, this is my data volume and hardware, what options do I have?" At the very least, you'll know exactly where your problem is, which stops a lot of wild goose chases.
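For Step 1, a crude but serviceable way to record the baseline (assuming the package can be run with dtexec; the .dtsx path is a placeholder) is to time a few runs from a script and keep the numbers for comparison after each decomposition step:

```python
# Time repeated dtexec runs of a package to get a baseline before decomposing the flow.
# The package path is a placeholder; dtexec /F runs a .dtsx file from disk.
import subprocess, time, statistics

PACKAGE = r"C:\ssis\LoadRawFile.dtsx"
RUNS = 3

elapsed = []
for _ in range(RUNS):
    start = time.monotonic()
    subprocess.run(["dtexec", "/F", PACKAGE], check=True, capture_output=True)
    elapsed.append(time.monotonic() - start)

print(f"baseline: mean {statistics.mean(elapsed):.1f} s over {RUNS} runs")
```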
Are you issuing any COMMITs? I've seen this kind of thing slow down when the working set gets too large (a relative measure, to be sure). A periodic COMMIT should keep that from happening.
First thoughts:
Are the database files growing (without instant file initialization for MDFs)?
Is the upload batched/transactioned? I.e., is it one big transaction?
