Extremely long build with Gradle (Android Studio)
Right now we are in a situation of having build times of 2 minutes 30 seconds for a very simple change. Compared to Ant, this is amazingly slow and is killing the productivity of the whole team.
I am using Android Studio and using the "Use local gradle distribution".
I've tried to give more memory to gradle:
org.gradle.jvmargs=-Xmx6096m -XX:MaxPermSize=2048m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
A lot more memory. AND IT IS STILL GIVING MEMORY ERRORS from time to time:
Exception in thread "pool-1-thread-1" java.lang.OutOfMemoryError: GC overhead limit exceeded
Amazing. I am using the parallel option and daemon:
org.gradle.parallel=true
org.gradle.daemon=true
It doesn't really help.
I've put the aforementioned parameters in ~/.gradle/gradle.properties (and I even doubted that Android studio is ignoring that, so I tested - it is not ignoring it).
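For reference, a consolidated ~/.gradle/gradle.properties with everything mentioned so far would look like this (the values are the ones from above; tune the heap sizes to your machine):

# ~/.gradle/gradle.properties
# JVM arguments for the Gradle daemon (values from the question)
org.gradle.jvmargs=-Xmx6096m -XX:MaxPermSize=2048m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
# Build decoupled modules in parallel and keep a warm daemon between builds
org.gradle.parallel=true
org.gradle.daemon=true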
Still, from the terminal I get a 1:30 build time vs. 2:30 in Android Studio, so I'm not sure what is wrong there. 1:30 is STILL CRAZY compared to Ant. If you know what Android Studio is doing (or ignoring, or rewriting as Gradle config), I'd be grateful to know.
So just CMD + B (simple compile) is super fast after changes, like 7 seconds.
But when it comes to running the app, it starts the task dexXxxDebug, which is just killing us.
I've tried putting
dexOptions {
    preDexLibraries = false
}
Doesn't help.
I understand that Gradle is probably still not ready for production environments, but I'm starting to regret our decision to move to it so early.
We have lots of modules, which is probably part of the problem, but that was not an issue with Ant.
Any help appreciated,
Dan
Some more information about execution times:
Description             Duration
Total Build Time        1m36.57s
Startup                 0.544s
Settings and BuildSrc   0.026s
Loading Projects        0.027s
Configuring Projects    0.889s
Task Execution          1m36.70s
The time eater:
:app:dexDebug 1m16.46s
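For anyone who wants to reproduce this breakdown, Gradle's built-in profiler produces it. A minimal invocation, assuming the standard wrapper and a debug build:

# Writes an HTML report with the per-phase timings above to build/reports/profile/
./gradlew assembleDebug --profile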
I'm not quite sure why Android Studio is slower than the command line, but you can speed up your builds by turning on incremental dexing. In your module's build file, add this option to your android block:
dexOptions {
    incremental true
}
In that dexOptions block you can also specify the heap size for the dex process, for example:
dexOptions {
    incremental true
    javaMaxHeapSize "4g"
}
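In case the placement is unclear, dexOptions belongs inside the android block of the module-level build.gradle. A minimal sketch, with illustrative values that are not taken from the question, and noting that incremental dexing was later deprecated and removed from the Android Gradle plugin:

apply plugin: 'com.android.application'

android {
    // compileSdkVersion, buildToolsVersion, defaultConfig, etc. go here

    dexOptions {
        incremental true       // sped up old plugin versions; ignored/removed in newer ones
        javaMaxHeapSize "4g"   // heap for the dex process
    }
}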
These options are taken from a thread on the adt-dev mailing list (https://groups.google.com/forum/#!topic/adt-dev/r4p-sBLl7DQ) which has a little more context.
Our team was facing the same issue.
Our project exceeds the dex method limit (>65k methods).
So, in our library project we put the options below in build.gradle:
dexOptions {
    jumboMode = true
    preDexLibraries = false
}
In our main project's build.gradle:
dexOptions {
    jumboMode = true
    // incremental true
}
Previously we had incremental true. After commenting it out, a run takes around 20 seconds, compared to 2 minutes 30 seconds before.
I don't know if this will solve your problem, but it may help others. :)
Disclaimer: This isn't a solution; it's a statement that there is no solution, with links to relevant sources to prove it.
Since none of the answers here solve a problem that has been lingering since 2014, I'm going to post a couple of links that describe a very similar problem and present OS-specific tweaks that may or may not help, since the OP did not specify an OS and the solutions vary a lot between them.
First is the actual AOSP bug-tracker issue about parallelization, with a lot of relevant material, still open and still with a lot of people complaining as of version 2.2.1. I like the guy who notes that it is no coincidence that the issue id (a high-priority one at that) includes "666". The way most people describe music programs and mouse movement stuttering during builds feels like looking into a mirror...
Note that people report good results with Process Lasso on Windows, while I have seen no one reporting anything really good from renice'ing or CPU-limiting on *nix variants.
This guy (who states he doesn't use Gradle) actually presents some very nice material on Ask Ubuntu, which unfortunately doesn't work in my case.
Here is another alternative that limits the number of threads a Gradle execution uses, but that didn't really improve things in my scenario, probably because of what somebody notes in another link: Android Studio spawns multiple Gradle instances, while the parameter only affects a single instance's parallelism.
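For completeness, a sketch of that kind of limit. The property below is a real Gradle setting in reasonably recent versions, but whether it helps here is exactly what is in question, since it governs only a single Gradle instance:

# ~/.gradle/gradle.properties
# Cap the number of worker processes one Gradle build may use.
# Android Studio may still spawn several Gradle instances in parallel.
org.gradle.workers.max=2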
Note that this all goes back to the original "666", high-priority issue...
Personally, I couldn't test many of the solutions because I work on a managed (no root privileges) Ubuntu machine and can't apt-get/renice, but I can tell you I have an i7-4770, 8 GB RAM and a hybrid SSD, and I still have this problem after a lot of memory and Gradle tweaks over the years. It is an exasperating issue, and I can't understand how Google hasn't committed the human resources necessary to fix something that is at the core of development for the most important platform they build.
One thing to note on my environment is: I work in a multi-dependency studio project, with about 10 subprojects, all of them building on their own and filling up the gradle pipeline.
When passing a value to javaMaxHeapSize, you can append the letter 'k' to indicate kilobytes, 'm' to indicate megabytes, or 'g' to indicate gigabytes.
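So, for example, the following are equivalent ways of giving the dex process 2 GB (a sketch; keep only one of them uncommented):

dexOptions {
    javaMaxHeapSize "2g"
    // javaMaxHeapSize "2048m"     // the same size in megabytes
    // javaMaxHeapSize "2097152k"  // the same size in kilobytes
}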
Running the build with '--offline' solved my problem.
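For context, '--offline' tells Gradle not to touch the network for dependency resolution, so it only works once every dependency is already in the local cache. A typical invocation:

# Build without network access; fails if a dependency is missing from the local cache
./gradlew assembleDebug --offline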
Related
How to compile deeplearning4j pretrained models into Java bytecode?
Why ask: I have trained a multi-layer network to recognize some specific soundwaves. It worked perfectly well and cost only ~1 ms to run. However, when I tried to migrate it to Android, I found to my astonishment that the APK is over 1 GB. I checked my model and found that it was merely 50 KB. I want to turn this model into Java code that works without DL4J. I don't care about the performance loss, since each soundwave is 10 seconds long and I have 10 seconds to waste.
What I have tried, and the results:
- reduce the Maven dependency to deeplearning4j-nn (1 GB -> 150 MB)
- cut the javacpp native libraries (150 MB -> 100 MB)
- try to make a reduced C++ library (I can't; C++ is beyond me)
What I want: I want it to be smaller than 1 MB.
What would be helpful: agibsonccc mentioned that "years back" there was a util for this. I'd appreciate it if somebody could simply provide me with the git version (or a clone would be even better). Though I have searched through GitHub and found nothing, there may be some that I have missed. If there are any related projects, even unstable or abandoned ones, please let me know. Other approaches are also welcome.
VS 2013 NuGet package manager taking forever to uninstall packages
I've been looking around to see what support I can find on this, but it seems non-existent for my particular issue. As an example, I started to uninstall jquery.datatables 1.10.10 from my project, but after 30 minutes it's still going. The only obvious sign of what's going on is that the progress bar seems to keep freezing in place for about 30 seconds before moving again. Does anybody have any idea what could be causing this, or suggestions on how to diagnose it?
It seems it was due to Team Foundation, whose actions were also running slowly; once I dropped about half the projects I'd selected, it made a major impact on both. Additionally, I'll be reviewing my workspace mappings in case they had a knock-on effect, which others seem to indicate they can.
SpecFlow feature files result in an out-of-memory exception
We are using SpecFlow quite extensively for our acceptance testing. The number of scenarios/variants across all feature files has easily gone beyond 10K. We are experiencing a strange issue while compiling these feature files: the VS build results in an out-of-memory exception. If we remove some of the scenarios, it builds okay. Just wanted to know if there is a better way to address this rather than excluding scenarios. Thanks in advance, Amit
Xcode: Can't Build Downloaded Project because Compiler Cannot Find a Framework
I'm new to this, so please make allowances. I'm trying to build Audioslicer, which seems to need a framework called IntervalSlider. The IntervalSlider build fails with: framework not found InterfaceBuilderKit. However, the framework seems to be present under the Frameworks group with the necessary headers. Can anyone suggest what I'm doing wrong? Thanks
Well, this looked interesting, so I downloaded the source, built it, and got an entire slew of build errors. According to SourceForge, this project hasn't been updated since 2006-12-04, and the default SDK is still set to 10.4.
This project uses a bunch of uncompiled libraries/frameworks which need to be compiled to work. Some of them may no longer compile now, nearly 4 years later and on new hardware/OS. This is a complex project which mixes Objective-C, vanilla C, and C++, so it's not the kind of project a novice can reasonably expect to get working. (I'm not even sure I could get it updated.)
In short, this looked like a good idea in its day, but the project has gone silent, stale, and out of date. You'll need to find an alternative unless you want to spend weeks or months (1) learning how to build such a complex project and (2) tracking down all the updated versions of the libraries (assuming they exist). I advise looking for another solution.
Too bad, because this looked like a really neat idea. Such is the fate of most FOSS. It takes too much drudgery coding to keep something like this up to date. All the fun in coding comes from the creation; maintenance coding is about as fun as washing the dishes. Few are willing to undertake such a chore year in, year out without pay.
In the future, always check the last project update date. If it's more than a year old, or before the last major OS revision, expect problems.
Should static analysis warnings fail the CI build?
Our team is investigating various options for static analysis in our project, and have mixed opinions about whether we want our Continuous Integration build to fail because of warnings from static analysis. The argument against failing the build is that there are often exceptions to the rules, and attempting to work around them just to make the build succeed reduces productivity. A better approach would be to generate reports with the build, and regularly dedicate developer time to addressing the reported issues. The counter-argument is that it is easy for the technical debt to build up if the bugs are not addressed immediately. Also, if the build fails when a potential bug is introduced, the amount of time required to fix it is reduced. What are your thoughts?
Personally, I'd rather see the build fail. While some warnings are false positives, those can be suppressed with a SuppressMessageAttribute and a Justification. When you do this, you are sure that every warning is evaluated by developers and nothing simply slips through.
It's probably a good idea to fail the build, but this doesn't have to be an absolutely black and white decision. Hudson lets you fail a build if a certain threshold of new static analysis faults is exceeded. So you can say "mark the build as unstable if 1 new fault is introduced; mark the build as failed if 5 new faults are introduced". This is something that's built into the various analysis plugins available for Hudson.
I typically make the build fail on static analysis errors, not only the CI build but also the one that runs on developers' machines before committing, and I use tools that can be included in the IDE. If you don't do this, my experience is that errors don't get fixed and actually never will: if you consider errors as cosmetic (or you wouldn't allow the commit, right?), there will always be something more important to do. If there is a justified exception, most tools allow excluding pieces of code (with things like a custom comment or an exclusion filter). If you want to use static analysis, do it right: fix the problem when it occurs and don't let an error propagate further into the system. A process that lets this happen is wrong: "Let's make toast American style: you burn, I'll scrape." --W. Edwards Deming
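Since this page is otherwise about Gradle, here is a sketch of that policy in a Gradle build using the built-in Checkstyle plugin, which fails the build on rule violations by default. The property names are the plugin's real ones, but treat the snippet as illustrative:

apply plugin: 'java'
apply plugin: 'checkstyle'

checkstyle {
    toolVersion = '8.12'     // illustrative Checkstyle version
    ignoreFailures = false   // fail the build on violations (this is the default)
}
// Rules are read from config/checkstyle/checkstyle.xml by default;
// the check task then fails whenever a violation is found.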
Tough call, without a good global answer. I'd like to agree with the two previous postings and say yes, but my Second Law of Static Analysis says that defects will congregate in the parts of the organization where the software engineering process is most badly broken. A corollary is that engineers who are forced to change their code in a hurry to make the warnings go away are the ones most likely to introduce new problems when they do so; I've seen depressingly many such bugs. I think it's better software engineering to do the false-positive marking outside the code, as in, e.g., Coverity and Klocwork, and do your enforcement based on that. It goes without saying that your main points about tracking such new defects as loudly as possible and dedicating time promptly to avoiding technical debt are excellent ones.
In addition to failing on errors, you need a process to investigate the warnings, and to decide whether some of them should become errors.