I'm developing with Play 2.1.1-RC1 in ~run mode.
Is there a way to avoid checking all the dependencies each time I update my classes?
For example, excluding org.hibernate.javax.persistence, be.objectify.deadbolt-core and others would cut compilation time from 14 to 3 seconds for me.
ADDITION:
I've found that it's possible to add "offline := true" to Build.scala and plugins.sbt. With this option it stops resolving from remote repos, but it still takes around 10 seconds to resolve from the local one. I'm looking for a way to disable resolving completely. My goal is to minimize compilation time as much as possible.
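For reference, a minimal sketch of where the setting goes (assuming the standard Build.scala that play new generates for Play 2.1, where appName, appVersion and appDependencies are already defined):
In project/Build.scala:
val main = play.Project(appName, appVersion, appDependencies).settings(
  offline := true // skip resolution against remote repositories
)
In project/plugins.sbt (on its own line, with blank lines around it as usual for .sbt files):
offline := true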
Use run instead of ~run.
Check out the play console instructions: http://www.playframework.com/documentation/2.0/PlayConsole
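For example, start the console and run without the file watcher (assuming the play launcher is on your PATH; the [my-app] prompt stands in for your application's name):
$ play
[my-app] $ run
With plain run, sources are recompiled on the next request rather than on every file save, as they are with ~run.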
Related
I am trying to install the Fuchsia source code, but it is giving me an error while fetching CIPD packages.
I tried the solution in this Stack Overflow post, but I still received the error.
I am just getting started with this and don't know a lot about it. I will be grateful for any help I can get.
I would recommend re-running jiri update; it will eventually complete. At present the prebuilts fetched from CIPD are large and can take a long time in certain regions or with slower internet connections. The downloads are performed incrementally, so restarting the process should make forward progress and eventually complete. I don't remember the exact flag offhand, but there is also a flag you can pass to jiri to explicitly increase the timeout.
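If you want to automate the retries, a simple shell loop like this works, since the downloads resume incrementally (this only re-runs the jiri update command mentioned above):
until jiri update; do
  echo "jiri update failed, retrying..."
  sleep 30
done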
Why does the statement:
v8::Isolate* isolate = v8::Isolate::New(create_params);
take 27 seconds to run?
This is from the "hello-world.cc" example from the V8 source.
I actually just got an answer from the Chromium team regarding this. The long bootstrapping of an isolate is an intended behavior in debug mode. To fix this, you can apparently either:
Build and link in release mode.
Use a snapshot to move the bootstrapping process from runtime to build time.
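For reference, a rough sketch of producing a release build of V8, which uses the startup snapshot and avoids the slow debug-mode bootstrapping (the exact targets and output paths depend on your V8 checkout and version):
# from the V8 checkout, generate a release build configuration
tools/dev/v8gen.py x64.release
# build everything in that configuration
ninja -C out.gn/x64.release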
The command below takes ages with no output or anything. Is there an alternative way to download packages for the Go language? I am new to Go.
go get -u github.com/gogits/gogs/
PS: My net connection is not that slow; downloading through git takes around 1 minute, but I can't do that individually for all dependencies.
Edit 1: Small packages like go get github.com/tools/godep download and install flawlessly; I have the problem only with github.com/gogits/gogs/. It's stuck there for an hour. Even a download progress indicator would have been helpful.
The go get command works, but it's terribly slow for packages that have lots of dependencies. The -v flag (not in the docs) helped me get verbose output of what is happening.
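For example:
go get -u -v github.com/gogits/gogs/
With -v, go get prints each package as it is fetched and built, so you can at least see that it is making progress.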
It took almost an hour to finish. The culprit might be GitHub or my ISP.
Verbose output should have been made the default by the developers.
*Sorry for my English.
You can use GoDep, which is a powerful Go dependency manager: https://github.com/tools/godep
GoDep also helps you get predictable builds, since it freezes the dependencies in your app.
It's very easy to get started; just follow the instructions on its GitHub page.
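A rough outline of the typical godep workflow (a sketch based on its README; the exact commands may differ between godep versions):
# install godep itself
go get github.com/tools/godep
# inside your project: record the current dependencies in Godeps/
godep save ./...
# on a clean checkout or another machine: fetch the recorded versions
godep restore
# build and test against the pinned dependencies
godep go build ./...
godep go test ./...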
Right now we are in a situation where a very simple change takes 2 minutes 30 seconds to build. This (compared to Ant) is amazingly slow and is killing the productivity of the whole team.
I am using Android Studio and using the "Use local gradle distribution".
I've tried giving more memory to Gradle:
org.gradle.jvmargs=-Xmx6096m -XX:MaxPermSize=2048m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
That's a lot more memory, and it is STILL giving memory errors from time to time:
Exception in thread "pool-1-thread-1" java.lang.OutOfMemoryError: GC overhead limit exceeded
Amazing. I am using the parallel option and daemon:
org.gradle.parallel=true
org.gradle.daemon=true
It doesn't really help.
I've put the aforementioned parameters in ~/.gradle/gradle.properties (I even suspected that Android Studio was ignoring that file, so I tested it - it is not ignoring it).
Still, from the terminal I get a 1:30 build time vs 2:30 in Android Studio, so I'm not sure what is wrong there. 1:30 is STILL CRAZY compared to Ant. If you know what Android Studio is doing (or ignoring, or rewriting as Gradle config), I'd be grateful to know.
So just CMD + B (simple compile) is super fast after changes, like 7 seconds.
But when it comes to running the app, it starts the task dexXxxDebug, which is just killing us.
I've tried putting
dexOptions {
preDexLibraries = false
}
Doesn't help.
I understand that Gradle is probably still not ready for production environments, but I'm starting to regret our decision to move to it so early.
We have lots of modules, which is probably part of the problem, but that was not an issue with Ant.
Any help appreciated,
Dan
Some more information about execution times:
Description Duration
Total Build Time 1m36.57s
Startup 0.544s
Settings and BuildSrc 0.026s
Loading Projects 0.027s
Configuring Projects 0.889s
Task Execution 1m36.70s
The time eater:
:app:dexDebug 1m16.46s
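(For anyone wanting to produce a breakdown like this: Gradle can generate a per-task timing report if you pass --profile; the HTML report is written under build/reports/profile/ by default. assembleDebug here is just an example task.)
./gradlew assembleDebug --profile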
I'm not quite sure why Android Studio is slower than the command line, but you can speed up your builds by turning on incremental dexing. In your module's build file, add this option to your android block:
dexOptions {
incremental true
}
In that dexOptions block you can also specify the heap size for the dex process, for example:
dexOptions {
incremental true
javaMaxHeapSize "4g"
}
These options are taken from a thread on the adt-dev mailing list (https://groups.google.com/forum/#!topic/adt-dev/r4p-sBLl7DQ) which has a little more context.
Our team was facing the same issue.
Our project exceeds the dex method limit (>65k).
So, in our library project we put the options below in build.gradle:
dexOptions {
jumboMode = true
preDexLibraries = false
}
In our project build.gradle:
dexOptions {
jumboMode = true
// incremental true
}
Previously we had incremental true. After commenting it out, the build takes around 20 seconds instead of 2 minutes 30 seconds.
I don't know whether this will solve your problem, but it may help others. :)
Disclaimer: This isn't a solution - it's a statement that there is no solution, with relevant links to back it up.
Since none of the answers here solve a problem that has been lingering since 2014, I'm going to go ahead and post a couple of links which describe a very similar problem and present OS-specific tweaks that may or may not help, since the OP did not specify an OS and the solutions vary a lot across them.
First is the actual AOSP bug-tracker issue referring to parallelization, with a lot of relevant material, still open and still drawing complaints as of version 2.2.1. I like the person who notes that the issue (a high-priority one at that) having an ID containing "666" is no coincidence. The way most people describe music programs and mouse movement stuttering during builds feels like looking into a mirror...
Note that people report good results with Process Lasso on Windows, while I don't really see anyone reporting anything good with renice'ing or CPU-limiting on *nix variants.
This guy (who states he doesn't use Gradle) actually presents some very nice stuff on Ask Ubuntu that unfortunately doesn't work in my case.
Here is another alternative that limits the threads of Gradle execution, but that didn't really improve things in my scenario, probably due to what somebody says in another link about Android Studio spawning multiple Gradle instances (while the parameter only affects one instance's parallelism).
Note that this all goes back to the original "666", high-priority issue...
Personally, I couldn't test many of the solutions because I work on a managed (no root privileges) Ubuntu machine and can't apt-get/renice, but I can tell you I have an i7-4770, 8GB of RAM and a hybrid SSD, and I have this problem even after a lot of memory and Gradle tweaks over the years. It is a maddening issue, and I can't understand how Google hasn't committed the necessary human resources to the Gradle project to fix something that is at the core of development for the most important platform they build.
One thing to note about my environment: I work in a multi-dependency Studio project with about 10 subprojects, all of them building on their own and filling up the Gradle pipeline.
When passing a value (e.g. to javaMaxHeapSize), you can append the letter 'k' to indicate kilobytes, 'm' to indicate megabytes, or 'g' to indicate gigabytes.
'--offline' solved my problem.
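For example (gradlew and the assembleDebug task are just placeholders for whatever you normally run; Android Studio also has an "Offline work" toggle in its Gradle settings):
./gradlew assembleDebug --offline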
I have a custom Heroku Buildpack that compiles CMake and OpenCV. The problem is, OpenCV takes FOREVER to compile. I've tried precompiling OpenCV and pulling it in during my build; however, I have not yet been successful in doing so.
I recently came across the COMPILE_TIMEOUT=n env variable that can be set to override the 15-minute timeout, but it's not working. Does anyone know if this env var is still supported? Or is there another approach besides precompiling?
I would ideally like to have the flexibility of compiling on the fly if I update to the latest version of OpenCV (compilations are cached on Heroku so I'm not waiting around for a full build on every deploy).
I think your best shot would be to build your binaries beforehand. However, Heroku still doesn't have great support for this.
See these links for some suggestions:
https://discussion.heroku.com/t/compiling-a-custom-binary-for-buildpack/224
https://discussion.heroku.com/t/opencv-and-statically-compiled-python/105
Pre-compiling binaries is the way to go; however, it requires time and effort that I'd rather avoid. I reached out to Heroku and they were willing to increase our build time to 30 minutes. Unfortunately, 30 minutes was still not enough to compile OpenCV. The Heroku team was kind enough to point me to Anvil, which happens to be the same build service that runs on Heroku. Looks promising!
https://github.com/ddollar/heroku-anvil
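From its README, usage looks roughly like this (a sketch only - I haven't verified these commands, and the buildpack URL is a placeholder, so check the repository for the current syntax):
# install the CLI plugin
heroku plugins:install https://github.com/ddollar/heroku-anvil
# build the app in the current directory against your buildpack
heroku build -b <buildpack-url>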