My current application has only one story board file with ~100 view controllers. It takes ~10 minutes to load and ~1 minute to build. Will breaking the one storyboard file I currently have into ~20 storyboard files provide me with much, much quicker load and build times? Note: In the future, the application I am working on may reach over ~100 storyboard files with ~10 view controllers per file. Would this be a reasonable long term solution to this problem? I would like to get the build time down to ~15 seconds instead of several minutes.
We adopted a multiple-storyboard approach instead of a single storyboard and got better results, both in load/build times and in teamwork.
Try to reuse your layouts by creating nib files.
Also, think about breaking your project into multiple projects/modules if possible. Try to keep your code as modular as possible, so that you get:
Shorter build times
Lower maintenance cost
Modules you can reuse in other projects
Better teamwork
Making a storyboard smaller and simpler will definitely make it much faster to open in Interface Builder in the Xcode editor.
Moreover, making your Main storyboard smaller and simpler will greatly decrease the launch time and initial memory footprint of your actual app.
However, many other factors influence build times. Xcode now lets you profile your build process, so take advantage of that feature. WWDC 2018 includes an extensive video on the topic of build times.
During my migration of CouchDB from 1.6.1 to 2.3.1, the couchup utility is taking a very long time to rebuild views, and there are memory issues with it. The databases are in the 500 GB range. It is taking forever: it has been almost 5 to 6 days and it is still not complete. Is there any way to speed it up?
When trying to replicate, couchdb dies after 2-3 minutes of couchup running because of memory leak issues, and then it starts again. Replication will take around 10 days. For replication it showed a progress bar, but for rebuilding views it does not, so I don't know how much has been done.
CouchDB is installed on a RHEL Linux server.
Reducing backlog growth
As couchup encounters views that take longer than 5 seconds to rebuild, it carries on calling additional view URLs, triggering their rebuilds. Once a number of long-running view rebuilds are in progress, even rebuilds that would otherwise have been shorter will take at least 5 seconds, leading to a large backlog. If individual databases are large or map/reduce functions are very inefficient, it would probably be best to set the timeout to something like 5 minutes. If you see more than a couple of:
Timeout, view is processing. Moving on.
messages, it is probably time to kill couchup and double the timeout.
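Since couchup essentially just calls the view URLs to trigger their rebuilds, another option is to do the same thing by hand with a long client-side timeout. Below is a rough Python sketch of that idea; the localhost URL, admin credentials and 300-second timeout are placeholders to adjust:

```
import requests

COUCH = "http://admin:password@localhost:5984"   # placeholder credentials/host
TIMEOUT = 300  # seconds, i.e. the "5 minutes" suggested above

for db in requests.get(f"{COUCH}/_all_dbs", timeout=60).json():
    if db.startswith("_"):          # skip _users, _replicator, ...
        continue
    design_docs = requests.get(
        f"{COUCH}/{db}/_design_docs",
        params={"include_docs": "true"},
        timeout=60,
    ).json().get("rows", [])
    for row in design_docs:
        ddoc = row["doc"]
        ddoc_name = ddoc["_id"].split("/", 1)[1]
        for view in ddoc.get("views", {}):
            url = f"{COUCH}/{db}/_design/{ddoc_name}/_view/{view}"
            try:
                # limit=0 still forces the index to be (re)built
                requests.get(url, params={"limit": 0}, timeout=TIMEOUT)
                print(f"built {db} {ddoc_name}/{view}")
            except requests.exceptions.Timeout:
                # the client gave up, but the server keeps indexing
                print(f"still indexing after {TIMEOUT}s: {db} {ddoc_name}/{view}")
```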
Observing index growth
By default view_index_dir is the same as the database directory, so if data is in /var/lib/couchdb/shards then /var/lib/couchdb is the configured directory and indexes are stored in /var/lib/couchdb/.shards. You can observe which index shard files are being created and growing, or move view_index_dir somewhere separate for easier observation.
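For example, a small script can total up the index shard files so growth is easy to watch over time. A minimal sketch, assuming the default directory layout described above (adjust INDEX_DIR if you have moved it):

```
import os

INDEX_DIR = "/var/lib/couchdb/.shards"   # default location, see above

# Collect the size of every index file under the view index directory.
sizes = {}
for root, _dirs, files in os.walk(INDEX_DIR):
    for name in files:
        path = os.path.join(root, name)
        sizes[path] = os.path.getsize(path)

# Print the largest index files first so it is obvious which views are growing.
for path, size in sorted(sizes.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{size / 1024**3:8.2f} GiB  {path}")
```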
What resources are running out?
You can tune CouchDB in general, but it is hard to say whether tuning is needed until the system is no longer rebuilding all indexes at once, etc.
In particular, you would want to look for and disable any auto-compaction. Look at the files in /proc/[couchdb proc] to figure out the effective fd limits, how many open files there are, and whether the crash happens around a specific number of open files. Due to sharding, the number of open files is usually a multiple of what it was in earlier versions.
Look at memory growth and figure out if it is stabilizing enough to use swap to prevent the problem.
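As a starting point, something like the following sketch can snapshot the open-file count, fd limit and resident memory of the CouchDB (beam.smp) process so a crash can be correlated with a specific limit. The pgrep lookup is an assumption (substitute the real pid if needed), and it needs to run as root or as the couchdb user:

```
import os
import subprocess

# Assumption: the CouchDB Erlang VM shows up as beam.smp; -o picks the oldest one.
pid = subprocess.check_output(["pgrep", "-o", "beam.smp"]).decode().split()[0]

# Count currently open file descriptors.
open_fds = len(os.listdir(f"/proc/{pid}/fd"))

# Read the effective fd limit and resident memory from /proc.
with open(f"/proc/{pid}/limits") as f:
    fd_limit = next(line for line in f if line.startswith("Max open files"))
with open(f"/proc/{pid}/status") as f:
    rss = next(line for line in f if line.startswith("VmRSS"))

print(f"open fds: {open_fds}")
print(fd_limit.strip())
print(rss.strip())
```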
I have been attempting to create a Dash app as a companion to a report, which I have deployed to Heroku:
https://ftacv-simulation.herokuapp.com/
This works reasonably well for the simplest form of the simulation. However, upon the introduction of more complex features, the Heroku server often times out (i.e. a single callback goes over the 30 second limit and the process is terminated). The two main features are the introduction of a more complex simulation, which requires 15-20 simple simulation runs, and the saving of older plots for the purposes of comparison.
I think I have two potential solutions to this. The first is restructuring the code so that the single large task is broken up into multiple callbacks, none of which go over the 30s limit, and potentially storing the data for the older plots in the user's browser. The second is moving to a different provider that can handle more intense computation (such as AWS).
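For what it's worth, a minimal sketch of the first option might look like the following, with each callback doing one short piece of work and dcc.Store keeping the accumulated results in the user's browser. The component ids and the one-pass-per-click structure are purely illustrative:

```
from dash import Dash, dcc, html, Input, Output, State

app = Dash(__name__)
app.layout = html.Div([
    html.Button("Run next pass", id="run"),
    dcc.Store(id="partial-results", data=[]),   # lives in the user's browser
    dcc.Graph(id="plot"),
])

@app.callback(
    Output("partial-results", "data"),
    Input("run", "n_clicks"),
    State("partial-results", "data"),
    prevent_initial_call=True,
)
def run_one_pass(n_clicks, results):
    # Do ONE short simulation pass here (well under the 30 s limit) and append
    # its output, instead of running all 15-20 passes in a single callback.
    results.append({"pass": len(results) + 1})
    return results

@app.callback(Output("plot", "figure"), Input("partial-results", "data"))
def update_plot(results):
    # Rebuild the comparison figure from whatever has been computed so far.
    return {"data": [], "layout": {"title": f"{len(results)} passes computed"}}

if __name__ == "__main__":
    app.run(debug=True)
```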
Which of these approaches would you recommend? Or would you propose a different solution?
I have a web app that allows users to insert short advert videos (30 to 60 seconds) into a longer main video (typically 45 minutes, but file sizes can vary widely).
The entire process involves:
Importing all selected files from S3
Encoding each to a common scheme, ipad-high.
Extracting clips from the main video.
Concatenating all clips from the main video with the advert videos.
For n videos to be inserted into the main video, n + 1 clips will be extracted.
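To make the n + 1 relationship concrete, here is a tiny sketch of the interleaving order (the clip/ad names are just illustrative):

```
# For n adverts, the main video is cut into n + 1 clips and interleaved with them.
def concat_order(n_adverts):
    order = []
    for i in range(1, n_adverts + 1):
        order += [f"clip_{i}", f"ad_{i}"]
    order.append(f"clip_{n_adverts + 1}")
    return order

print(concat_order(2))  # ['clip_1', 'ad_1', 'clip_2', 'ad_2', 'clip_3']
```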
Since Transloadit does not provide any estimates on how long an assembly may run, I'm looking to find a way to estimate this myself so I can display a progress bar or just an ETA to give users an idea of how long their jobs will take.
My first thought is to determine the total size of all files in the assembly and save that to a Redis database, along with the completion time for that assembly.
Subsequent runs will use this as a benchmark of sorts, i.e. if 60 GB took 50 minutes, how long will 25 GB take?
The data in Redis will be continually updated (I guess I could make the values a running average of sorts) to make the estimates as reliable as possible.
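A minimal sketch of that idea, assuming a local Redis instance and made-up key names (it keeps a cumulative average of throughput rather than a decaying one):

```
import redis

r = redis.Redis(decode_responses=True)

def record_run(total_bytes, seconds):
    """Fold a finished assembly into the running totals (used to derive bytes/s)."""
    pipe = r.pipeline()
    pipe.incrbyfloat("assembly:bytes", total_bytes)
    pipe.incrbyfloat("assembly:seconds", seconds)
    pipe.execute()

def estimate_seconds(total_bytes):
    """Estimate the duration of a new assembly from historical throughput."""
    done_bytes = float(r.get("assembly:bytes") or 0)
    done_seconds = float(r.get("assembly:seconds") or 0)
    if not done_bytes:
        return None  # no history yet
    return total_bytes / (done_bytes / done_seconds)

# e.g. if 60 GB took 50 minutes, a 25 GB job is estimated at roughly 21 minutes
record_run(60e9, 50 * 60)
print(estimate_seconds(25e9) / 60)
```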
Any ideas are welcome, thanks :)
I'll paraphrase some of the conversation over at Transloadit regarding this question:
Estimating the duration of an Assembly is a complex problem to solve because of how many factors go into the calculation. For example: how many files are in a zip that is being uploaded? How many files are in the directory that will be imported? How many files will pass the filter on colorspace: rgb? These are things that are only found out as the Assembly runs, but they can wildly alter the ETA.
There are plans for a dashboard that will showcase graphs with information on your Assemblies, such as throughput in Mbit/s. Combined with historical data on the Template and file sizes, this could be used for rough estimations.
One suggestion was that instead of an ETA, it may be easier to implement a progress bar showing when each step or job has been completed. The downside with this is of course the accuracy, but it may be all you need for a front-facing solution.
You may also be interested in looking into turbo mode. If you're using the /video/encode or /video/concat robot, it may help dramatically reduce encoding times.
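As a rough illustration of the progress-bar suggestion, you could poll the Assembly status URL and count which steps have produced results so far. The field names ("ok", "results") and the step names in this sketch are assumptions to check against the actual status JSON:

```
import time
import requests

def watch_assembly(assembly_url, steps, poll=5):
    """Poll the Assembly status and report how many steps have results so far."""
    while True:
        status = requests.get(assembly_url, timeout=30).json()
        done = [s for s in steps if status.get("results", {}).get(s)]
        print(f"{len(done)}/{len(steps)} steps finished: {done}")
        if status.get("ok") == "ASSEMBLY_COMPLETED" or status.get("error"):
            return status
        time.sleep(poll)

# Hypothetical usage; the step names must match the ones in your Template:
# watch_assembly("https://api2.transloadit.com/assemblies/<id>",
#                ["imported", "encoded", "clips", "concatenated"])
```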
I would like to compare data read/write speed (i.e. the file upload and download speed) with my application between various servers (like machineA, machineB and machineC) at varying times.
I have tried to automate the download with the help of curl as suggested here.
The network speed varies from time to time, and I cannot make parallel test runs between machines. In such a case, what would be the best way to make a "valid data read/write speed comparison with respect to network speed"?
Are there any open source tools to do these speed tests?
Any suggestions would be greatly appreciated!
It will never take exactly the same time; keep this in mind. The best you can do is set the parameters identically for each test, run some number X of tests, and take the average time across the series. That is a good way to do this.
Another issue, I guess, is your software itself. You can write code to compare the times. You don't say what the application is, but you have to start timing right before the download and stop right after, without post-processing. Store the data (machine ID, download time, upload time, etc.) and then compare.
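A minimal sketch of that timing loop in Python, with the server URLs and run count as placeholders:

```
import time
import statistics
import requests

SERVERS = {
    "machineA": "http://machineA.example.com/testfile.bin",
    "machineB": "http://machineB.example.com/testfile.bin",
}
RUNS = 5

for name, url in SERVERS.items():
    times = []
    for _ in range(RUNS):
        start = time.monotonic()
        resp = requests.get(url, stream=True)
        # Consume the body so the whole transfer is included in the timing.
        size = sum(len(chunk) for chunk in resp.iter_content(1024 * 1024))
        times.append(time.monotonic() - start)
    avg = statistics.mean(times)
    print(f"{name}: {size / avg / 1e6:.1f} MB/s average over {RUNS} runs")
```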
Hope it helps!
I have created an application and deployed it using ClickOnce. Its size is about 30 MB, and at first it took about 2-3 hours to download over a 128KB leased line. Then I enabled file compression on IIS and the download time dropped to about 40 minutes.
I want to decrease the download time to 10-20 minutes, but I have no idea how to do that.
My solution has 4 projects: a, b, c and d. Two of these projects are very large (nearly 15 MB). I have tried to decompose these two projects, but their design is very tightly coupled and I have no time to do that.
If anyone has an idea or a solution for this problem, please help me.
Well, about your only option is to find ways to compress/reduce the size of the projects themselves. Although, if you really have 128KB, it should not have taken anywhere near 2-3 hours for 30MB, unless you are running other data across that connection at the same time. As for the project itself, unless you can find a way to reduce the project size (you've already enabled compression in IIS) there's really nothing else open to you.
A 128 kilobits per second connection should be able to transfer a 30 MB file in about 33 minutes, so it sounds like your connection is OK. You can possibly decrease the size of the file by getting rid of unnecessary dependencies (Visual Studio can perform this task for you, under Project Settings). You probably won't be able to decrease the file size very much, however.
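For reference, the arithmetic behind that estimate (ignoring protocol overhead):

```
size_bits = 30 * 1024 * 1024 * 8   # 30 MB expressed in bits
line_rate = 128 * 1000             # 128 kbit/s line
print(size_bits / line_rate / 60)  # roughly 33 minutes
```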
I think the most reasonable thing for you to do is to try and upgrade your connection. Even my crappy consumer-grade upload rate nearly quadruples your 128kbps line.
Good luck.