I've been trying to figure out how to extend the functionality of JMeter for a couple of days, and I'm stumped. I basically want to build testing functionality for a proprietary DB (the specifics aren't too important here). However, the issue I'm encountering is where to even begin creating this functionality.
I've tried various resources on the JMeter website (an example) and the wiki (an example), but it all boils down to this: I can't find a repository that I can pull into Eclipse (and when building with Ant, download_jars fails because it can't connect to the repo listed there). Are there any up-to-date resources on how to build a JMeter plugin? Or am I doing something wrong because I'm inexperienced at setting up something like this?
Any help is greatly appreciated, but please don't just link the first thing on Google; I have done quite a bit of searching already. Thanks!
Edit: It turned out the reason I couldn't get Eclipse working with a repo was the network restrictions I had to deal with. When I tried on another computer/network, it worked fine. I used this JMeter tutorial, but since it is out of date regarding the repository (they use SVN now), I used http://svn.apache.org/repos/asf/jmeter as the root with Subclipse. Posting this in case anyone runs into the same problem I did.
I also searched for how to build a JMeter plugin for my graph plugin work. I found simple, well-written source code on Ruben Laguna's blog; from it you can understand the basic structure and the steps needed to create a JMeter plugin.
Check these out (a skeletal sampler sketch follows the links):
Graph plugin - http://rubenlaguna.com/wp/better-jmeter-graphs/
Enhanced-jdbc-sampler - http://rubenlaguna.com/wp/enhanced-jdbc-sampler-for-apache-jmeter-22/
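To give an idea of the basic structure, here is a minimal sketch of a custom sampler using JMeter's Java sampler API. The class name and the query parameter are hypothetical; a real plugin for a proprietary DB would put its client calls where the comment is:

    import org.apache.jmeter.config.Arguments;
    import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
    import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
    import org.apache.jmeter.samplers.SampleResult;

    // Hypothetical sampler: compile it, drop the jar into JMeter's lib/ext
    // directory, and it becomes selectable from a Java Request sampler.
    public class ProprietaryDbSampler extends AbstractJavaSamplerClient {

        @Override
        public Arguments getDefaultParameters() {
            Arguments args = new Arguments();
            args.addArgument("query", "SELECT 1"); // hypothetical parameter
            return args;
        }

        @Override
        public SampleResult runTest(JavaSamplerContext context) {
            SampleResult result = new SampleResult();
            result.sampleStart();
            try {
                String query = context.getParameter("query");
                // ... execute 'query' against the proprietary DB here ...
                result.setResponseCodeOK();
                result.setSuccessful(true);
            } catch (Exception e) {
                result.setSuccessful(false);
                result.setResponseMessage(e.toString());
            } finally {
                result.sampleEnd();
            }
            return result;
        }
    }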
I am trying to implement some tests for an HTML parser written in Ruby that uses Nokogiri for parsing; it gets its response from an HTTP request.
Currently the test uses a fixture (a saved HTML file). The problem is that from time to time the real response changes (IDs or classes of elements change) so that the parser no longer parses it correctly, yet the test still passes because it uses the static fixture.
Could you recommend an approach for dealing with such situations?
I see three possible ways to achieve this:
You create a rake task which updates the HTML file by downloading the new version from the Internet. When you want to test against current content, simply run the rake task and then run your tests.
You make your tests live. This means that instead of parsing your local file during your tests, you download the latest version and run your tests against it.
It's a mix of 1 and 2. When you start your tests, you can set an ENV parameter such as LIVE=true. If LIVE is true, you download the latest version of your content from the Internet and save it locally, then run your tests against the downloaded content (see the sketch after this list).
If you run your tests with LIVE=false, you skip the download and simply use the previously downloaded content.
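A minimal sketch of option 3; the fixture path, URL, and helper name are all hypothetical, and it assumes only Ruby's standard library:

    # spec/support/fixture_helper.rb -- hypothetical helper for option 3
    require "net/http"
    require "uri"

    FIXTURE_PATH = "spec/fixtures/page.html"   # hypothetical path
    SOURCE_URL   = "http://example.com/page"   # hypothetical URL

    def load_fixture
      if ENV["LIVE"] == "true"
        # Refresh the local fixture from the live site before testing.
        html = Net::HTTP.get(URI(SOURCE_URL))
        File.write(FIXTURE_PATH, html)
      end
      File.read(FIXTURE_PATH)
    end

A spec can then call Nokogiri::HTML(load_fixture), and running the suite with LIVE=true refreshes the fixture first.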
Make sense?
Hope it helps!
While attempting to adopt more TDD practices lately on a project, I've run into a situation regarding tests that cover future requirements, and I'm curious how others are solving this problem.
Say, for example, I'm developing an application called SuperUberReporting and the current release is 1.4. As I'm developing features for SuperUberReporting 1.5, I write a test for a new file-export feature that will allow exporting report results to a CSV file. While writing that test, it occurs to me that features supporting exports to other formats are slated for later versions (1.6, 1.7, and 1.9), which are documented in issue-tracking software. Now the question I'm faced with is whether I should write tests for those other formats now or wait until I actually implement the features. This question hits at something more fundamental about TDD, which I would like to ask more broadly.
Can/should tests be written up front as soon as requirements are known, or should the degree of stability of the requirements somehow determine whether a test should be written or not?
More generally, how far in advance should tests be written? Is it OK to write a test that will fail for two years until the feature it covers is implemented? If so, how would one organize the tests to separate those that are required to pass from those that are not yet required to pass? I'm currently using NUnit for a .NET project, so I don't mind specifics, since they may better demonstrate how to accomplish such organization.
If you're doing TDD properly, you will have a continuous integration server (something like CruiseControl or TeamCity or TFS) that builds your code and runs all your tests every time you check in. If any tests fail, the build fails.
So no, you don't go writing tests in advance. You write tests for what you're working on today, and you check in when they pass.
Failing tests are noise. If you have failing tests that you know fail, it will be much harder for you to notice that another (legitimate) failure has snuck in. If you strive to always have all your tests pass, then even one failing test is a big warning sign -- it tells you it's time to drop everything and fix that bug. But if you always say "oh, it's fine, we always have a few hundred failing tests", then when real bugs slip in, you don't notice. You're negating the primary benefit of having tests.
Besides, it's silly to write tests now for something you won't work on for years. You're delaying the stuff you should be working on now, and you're wasting work if those future features get cut.
I don't have a lot of experience with TDD (just started recently), but I think while practicing TDD, tests and actual code go together. Remember Red-Green-Refactor. So I would write just enough tests to cover my current functionality. Writing tests upfront for future requirements might not be a good idea.
Maybe someone with more experience can provide a better perspective.
Tests for future functionality can exist (I have BDD specs for things I'll implement later), but should either (a) not be run, or (b) run as non-error "pending" tests.
The system isn't expected to make them pass (yet): they're not valid tests, and should not stand as a valid indication of system functionality.
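Since the question mentions NUnit, here is a minimal sketch of keeping a future-feature test visible but non-failing via the [Ignore] attribute; the fixture and test names are hypothetical:

    using NUnit.Framework;

    [TestFixture]
    public class ReportExportTests   // hypothetical fixture
    {
        [Test]
        public void ExportsReportAsCsv()
        {
            // Real test for the feature currently under development (1.5).
            Assert.Pass();
        }

        [Test, Ignore("Planned for 1.6; see the issue tracker")]
        public void ExportsReportAsXlsx()
        {
            // Reported as ignored/pending in the results, not as a failure.
        }
    }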
Is anyone aware of command line tools that can validate CSS and/or HTML?
The W3C offers its validators for local installation, with directions to use from the command line, but the installation process is a nightmare for anyone who isn't a seasoned Java developer.
I've searched with Google, but can't find anything.
Ideally I'd like to use a tool (or tools) that I can point at my CSS, and have it report back on any errors. I want it to be local to increase the speed of my debugging cycles.
Ideally, the tools will understand HTML5 and CSS3.
There is tidy for HTML. It's more than a validator: it not only checks whether your HTML is valid but also tries to fix it. You can just look at the errors and warnings and ignore the fixes if you want.
I'm not sure how well it works with HTML5, but take a look at Wanted: Command line HTML5 beautifier; there are some parameter suggestions there.
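For a quick error report without rewriting the file, an invocation along these lines should work (page.html is a placeholder; -e lists only errors and warnings, -q suppresses the rest):

    tidy -q -e page.html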
For CSS there is CSSTidy (I have never used it, though).
Regarding the W3C validator: if you happen to use Debian/Ubuntu, the package w3c-markup-validator is in the repositories and very easy to install via package management. Packages for other distros are also available.
And the W3C CSS validator is available as a jar, which is easy to use:
java -jar css-validator.jar http://www.w3.org/
One of the most popular web-based validators is http://validator.nu.
On their About page, they list a command-line script (written in Python) for validation.
On Ubuntu, you can install the package w3c-markup-validator. It provides a CGI web interface, but you do not have to use that.
You can use my w3c-validator-runner to run the validator without having a webserver.
If that does not work, consider starting a webserver. You can then use srackham/w3c-validator.
W3C has the source to their validators here: https://github.com/w3c
Although not directly a solution to your problem, you could consider using a CSS-extension framework for the validation part. I use SASS extensively in all my web projects and find it indispensable once you get used to it. Besides all the fancy mixins, variables, and other features, it will also validate your CSS/SASS markup and report errors, as it is perfectly backwards compatible with regular CSS3. The nice thing is that it works as a Ruby gem, which means it runs locally and can be integrated with other workflows through either Ruby or the command line (a terminal in a Unix environment).
Take it for a spin: http://sass-lang.com/docs/yardoc/
Run sass style.scss and see what happens.
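For instance, a hypothetical style.scss like the one below compiles cleanly with sass style.scss style.css, while a syntax error in it makes sass abort with an error report, which effectively doubles as validation:

    // hypothetical style.scss
    $brand: #336699;
    .header {
      color: $brand;
      a { text-decoration: none; }
    }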
Not sure if this works for you, but if you have Node and npm there are html-validator and html-validator-cli: https://github.com/zrrrzzt/html-validator and https://github.com/zrrrzzt/html-validator-cli
Short version
We need a Maven Doxia alternative that can generate good-looking PDFs (at a minimum, code snippets should be properly indented and have a configurable font size). The Maven developers proposed maven-pdf-plugin in DOXIA-419, but it has the same problems. The aforementioned DOXIA-419 has details on the difficulties we've experienced with Doxia.
Detailed version
We develop a BIG product providing Java/C/C++/C#/etc. APIs. Dozens of client-customized branches are maintained and developed simultaneously.
We need a tool to facilitate automatic document generation meeting these requirements:
Include arbitrary snippets from Java/XML/etc samples.
Confluence Snippet Plugin is a good example of this feature.
Generate good looking printable documents (e.g. PDF).
Generate online documents having clickable cross-references etc (e.g. HTML).
Unattended mode (e.g. should be easy to run document generation process from Ant script).
Documentation source content (from which PDFs/etc are later generated) should be kept in a human-readable easy-to-diff format.
Documentation source content should be kept in separate files (not Java sources).
Support (Java/XML/etc.) syntax highlighting.
UPDATE:
8. Windows OS compatibility.
My open source project Dexy might work for you. It's an authoring tool rather than an automatic document-generation tool, so it's not like JavaDoc, which creates a whole structure automatically. Source code and document content are kept separate, syntax-highlighting support is very good, and document snippets are available. I use LaTeX for good-looking printable documents, but you could use any other text-based format that compiles to PDF if you preferred. Regarding the clickable cross-references, you'd have to write HTML templates which could then be populated automatically (I'm doing so now, replacing JavaDoc on a project). You can also run live code examples and include their output in your documentation.
http://dexy.it
I'm curious. I'm looking into creating a CI server and wondering: after the first couple of obvious tasks, what else can an automated build do?
The tasks that I'm aware of (not in any order):
Compile (debug/release versions)
Code style conformance
Automated tests (unit/integration/etc.)
Code coverage
Version incrementing
Deployment
I'm not looking for the names of software, the build engine to use, or anything like that; just the repetitive and (maybe) important tasks that can be automated to make the build process ridiculously simple from an end-user perspective.
The simple answer to this is: basically anything that a script can be written for.
For example, if you are using CruiseControl, anything that you can do from an Ant script can be automated, and that includes calling other (not necessarily Ant) scripts as well; a sketch of such a build script follows below.
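As a rough sketch of what such an Ant build might look like (the project, target, and path names are hypothetical, and in a real setup the junit task also needs the JUnit jar and a formatter configured):

    <project name="superuberbuild" default="ci" basedir=".">
      <target name="compile">
        <mkdir dir="build/classes"/>
        <javac srcdir="src" destdir="build/classes"
               debug="true" includeantruntime="false"/>
      </target>

      <target name="test" depends="compile">
        <mkdir dir="build/reports"/>
        <junit haltonfailure="true">
          <classpath>
            <pathelement location="build/classes"/>
          </classpath>
          <batchtest todir="build/reports">
            <fileset dir="build/classes" includes="**/*Test.class"/>
          </batchtest>
        </junit>
      </target>

      <target name="docs" depends="compile">
        <javadoc sourcepath="src" destdir="build/docs"/>
      </target>

      <!-- The CI server just invokes this aggregate target on every check-in. -->
      <target name="ci" depends="compile,test,docs"/>
    </project>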
That being said, you've got most bases covered in your initial list. To that I would add
Generation of documentation
Repository maintenance and backup operations
Auto-updating the company website, e.g. whenever there's a new release of the software, documentation is updated, etc.
Reports, e.g. aggregating and summarising bug-tracker issues and activity per project/product
HTH
Building documentation
Building installers
Creating web sites
Initialising virtual images
Setting up databases
Reporting?
You may want to report on the things you find during the tasks you outlined above. You could also do things such as duplication reporting, or, if you run something like FindBugs, report on the issues found (e.g. http://findbugs.sourceforge.net/bugDescriptions.html).
You could also generate a releasable package of the product in the build.
It's all about automation. If you can find something that needs to be done, then automate it. For example, you can do tonnes of code analysis or testing. Ultimately it comes down to repeating things easily. Find what you need to do to improve quality and automate it (and I fall strongly on the side of more testing being better).