We are using SpecFlow quite extensively for our acceptance testing.
The number of scenarios/variants across all of our feature files has easily grown beyond 10K.
We are hitting a strange issue when compiling these feature files: the Visual Studio build fails with an out-of-memory exception. If we remove some of the scenarios, it builds fine.
I just wanted to know if there is a better way to address this than excluding scenarios.
Thanks in advance
Amit
I know that the 5.0 release notes say "After the migration, source syntax-highlighting won't be available on a project until it has been successfully analyzed".
BUT I can't imagine there is no way to activate it other than by running another analysis. When you have thousands of components (as we do), you can't plan 4,500 analyses just to "restore" a basic but helpful feature! All the more so when you know that most of these components haven't changed in a long time... :(
So please tell me we can write a little batch or program that will do the job without needing to pull all the sources! I don't understand this limitation of the upgrade (why aren't the sources accessible?).
You should trust the release notes. The information required for syntax highlighting is computed during analysis. Note that it also requires the language plugins to support this feature; I suggest upgrading them to the latest versions.
I recently started working on a large Rails application. Simplecov says test coverage is above 90%. Very good.
However, now and again I discover files that are not even loaded by the test suite. These files are actually used in production, but for some reason nobody cared enough to write even the simplest test for them. As a result they are not counted in the coverage metrics.
It worries me since there is an unknown amount of code that is likely to break in prod without us noticing.
Am I the only one to have this problem? Is there a well-known solution? Can we have coverage metrics for files not loaded?
The contributors added a new config option, track_files, exactly for this purpose. For a Rails project the value could look like this:
track_files '{app,lib}/**/*.rb'
More details here: https://github.com/colszowka/simplecov/pull/422
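A minimal sketch of where that option goes, assuming SimpleCov is started at the very top of spec_helper.rb (before any application code is loaded); the file name and the 'rails' profile are illustrative:

```ruby
# spec_helper.rb -- SimpleCov must start before application code is required
require 'simplecov'

SimpleCov.start 'rails' do
  # Include every Ruby file under app/ and lib/ in the report,
  # even ones the test suite never loads (they show up with 0% coverage)
  track_files '{app,lib}/**/*.rb'
end
```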
I ended up adding this to my environments/test.rb:
config.eager_load = true
config.eager_load_paths += ["#{config.root}/lib"]
However, adding lib/ can have downsides, such as loading generators. This post does a good job of explaining the pros and cons of each approach.
I'm trying to analyze programmer profiles. I'm looking for people who are duplicating code, and trying to understand why they're doing it.
My idea is to identify the cause (laziness, lack of knowledge, etc.) and attack the problem at its root.
Is there any way to see only the duplications added in the LAST analysis in SonarQube?
I just checked on Nemo, and the Time Machine view only tells you how much code duplication was added since the last analysis; unlike other metrics, it doesn't actually link to the new issues. Most likely it's not supported yet.
Right now we are seeing build times of 2 minutes 30 seconds for a very simple change. Compared to Ant this is amazingly slow, and it is killing the productivity of the whole team.
I am using Android Studio and using the "Use local gradle distribution".
I've tried to give more memory to gradle:
org.gradle.jvmargs=-Xmx6096m -XX:MaxPermSize=2048m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
That is a lot more memory. AND IT IS STILL GIVING MEMORY ERRORS from time to time:
Exception in thread "pool-1-thread-1" java.lang.OutOfMemoryError: GC overhead limit exceeded
Amazing. I am using the parallel option and daemon:
org.gradle.parallel=true
org.gradle.daemon=true
It doesn't really help.
I've put the aforementioned parameters in ~/.gradle/gradle.properties (and I even doubted that Android studio is ignoring that, so I tested - it is not ignoring it).
Still, from the terminal I get a 1:30 build time vs 2:30 in Android Studio, so I'm not sure what is wrong there. 1:30 is STILL CRAZY compared to Ant. If you know what Android Studio is doing (or ignoring, or rewriting as gradle config), I'd be grateful to know.
So just CMD + B (simple compile) is super fast after changes, like 7 seconds.
But when it comes to running the app, it starts the task dexXxxDebug, which is just killing us.
I've tried putting
dexOptions {
    preDexLibraries = false
}
Doesn't help.
I understand that gradle is probably still not ready for production environments, but I'm starting to regret our decision to move so early to it.
We have lots of modules, which is probably part of the problem, but that was not an issue with Ant.
Any help appreciated,
Dan
Some more information about execution times:
Description Duration
Total Build Time 1m36.57s
Startup 0.544s
Settings and BuildSrc 0.026s
Loading Projects 0.027s
Configuring Projects 0.889s
Task Execution 1m36.70s
The time eater:
:app:dexDebug 1m16.46s
I'm not quite sure why Android Studio is slower than the command line, but you can speed up your builds by turning on incremental dexing. In your module's build file, add this option to your android block:
dexOptions {
    incremental true
}
In that dexOptions block you can also specify the heap size for the dex process, for example:
dexOptions {
    incremental true
    javaMaxHeapSize "4g"
}
These options are taken from a thread on the adt-dev mailing list (https://groups.google.com/forum/#!topic/adt-dev/r4p-sBLl7DQ) which has a little more context.
Our team was facing the same issue.
Our project exceeds the dex method limit (>65k).
So, in our library project we put the following options in build.gradle:
dexOptions {
    jumboMode = true
    preDexLibraries = false
}
In our project build.gradle:
dexOptions {
    jumboMode = true
    // incremental true
}
Previously we had incremental true. After commenting it out, the app takes around 20 seconds to run, compared to 2 minutes 30 seconds before.
I don't know whether this will solve your problem, but it may help others. :)
Disclaimer: This isn't a solution - it's a statement that there is no solution, with links to relevant sources to prove it.
Since none of the answers here solve a problem that has been lingering since 2014, I'm going to go ahead and post a couple of links which describe a very similar problem and present OS-specific tweaks that might or might not help, since the OP did not specify an OS and the solutions vary a lot across them.
First is the actual AOSP bug-tracker issue referring to parallelization, with a lot of relevant material, still open and still with a lot of people complaining as of version 2.2.1. I like the guy who notes that it is no coincidence the issue ID (a high-priority one at that) includes "666". The way most people describe music programs and mouse movement stuttering during builds feels like looking into a mirror...
You should note that people report good results with Process Lasso on Windows, while I see no one really reporting anything good with renice'ing or CPU-limiting on *nix variants.
This guy (who states he doesn't use gradle) actually presents some very nice stuff in Ask Ubuntu, that unfortunately doesn't work in my case.
Here is another alternative that limits the number of threads gradle executes with, but that didn't really improve things in my scenario, probably due to what somebody says in another link about Android Studio spawning multiple gradle instances (the parameter only affects one instance's parallelism).
Note that this all goes back to the original "666", high-priority issue...
Personally I couldn't test many of the solutions because I work on a managed (no root privileges) Ubuntu machine and can't apt-get/renice, but I can tell you I have an i7-4770, 8 GB RAM and a hybrid SSD, and I still have this problem after years of memory and gradle tweaks. It is an exasperating issue, and I can't understand how Google hasn't committed the human resources necessary to the gradle project to fix something that is at the core of development for the most important platform they build.
One thing to note about my environment: I work on a multi-dependency studio project with about 10 subprojects, all of which build on their own and fill up the gradle pipeline.
When passing a value, you can append the letter 'k' to indicate kilobytes, 'm' to indicate megabytes, or 'g' to indicate gigabytes.
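As a hedged illustration (the numbers here are arbitrary), the same suffixes apply wherever a JVM heap size is specified, for example the javaMaxHeapSize option shown above or the -Xmx flag in gradle.properties:

```groovy
dexOptions {
    // Equivalent ways of requesting a 2 GB heap for the dex process
    javaMaxHeapSize "2048m"  // megabytes
    // javaMaxHeapSize "2g"  // gigabytes
}
```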
'--offline' solved my problem.
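For reference, a minimal sketch of how that flag is passed, assuming the standard Gradle wrapper and a debug build; --offline makes Gradle resolve dependencies from the local cache instead of checking the network:

```shell
# Build without touching the network; fails if a dependency
# is not already in the local cache
./gradlew assembleDebug --offline
```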
Microsoft has a lot of stuff in there, but I'm wondering what features of Visual Studio Team System people really like and really use.
I'm specifically thinking about Team System as opposed to plain old Visual Studio.
What makes it worth the price?
I use the Development Edition of VSTS 2005 and am evaluating 2008. My top picks:
Profiler
Coding guidelines -- rules enforcement part
My favorites:
Profiler
Integrated Testing Environment: I know a lot of people prefer other test frameworks but having the integration is just sweet.
FxCop
Some of the best features come from adding Team Foundation Server:
Continuous integration builds can be set up to run unit tests on every build
Code coverage figures can be gathered based on the unit test run
Reports of build success, unit test success, code coverage %, etc. can be produced daily
Code check-in can mark a work item (bug report) fixed, or can start the workflow to do so
It not only gives developers a better idea of what's going on with their code and how to fix it (unit tests, code coverage, code analysis); it also gives management an overall picture of the same, without having to come around and bug the developers individually.
I like the line-by-line blame, profiler (as mentioned), but more importantly, I like the reports it produces, such as defect rates over time.
However, even though there are plenty of features that I like, I certainly don't think it provides good value for money.