When opening the JMeter 3.2 GUI, the WorkBench always exists.
The WorkBench also can't be deleted, although when saving a Test Plan, the WorkBench isn't saved by default.
It's also confusing for newcomers that you can add logic and samplers to the WorkBench even though they won't be executed or saved.
I almost never use the WorkBench, so I wanted to know why it is important in JMeter.
Why is the WorkBench shown by default in JMeter, and why can't it be deleted?
Imagine you have a wood shop. You have a WorkBench, and you are building a bird house (Test Plan). Sometimes you need to get out your saws (Test Script Recorder and Recording Controller) and other tools. You have wood (recordings) on your bench that you aren't yet sure belongs on your bird house. When you leave your wood shop, you don't care about anything that is on your WorkBench; you only care about the bird house you are working on. When you come back, you want to see only the bird house and an empty WorkBench. So after you leave, your housekeeper puts the tools away and throws away all the scrap wood on your workbench.
So that's how to think of it. Primarily, it is a place to use your Test Script Recorder. If you don't use that tool, the WorkBench probably doesn't make as much sense. When a system I have been testing does an upgrade, I sometimes open the JMeter script I have and do a new recording to my WorkBench, then compare it with what I had previously recorded to see whether any changes have been made to the API calls. It keeps things tidy, because the last thing you want is some extra junk call in your Test Plan that you didn't want there. Sometimes, when I'm debugging certain calls, I will copy the old stuff to the WorkBench, just in case I muck things up in the Test Plan.
It appears that my question led to a new UX enhancement request: Remove Workbench.
It is currently not developed (status NEW).
EDIT
Workbench was removed in JMeter version 4.0!
Related
Sometimes I need to add new fonts to use in reports, but I can't restart the production server every time. Is there any way to add new fonts for the reports and have them take effect without restarting the server?
To my knowledge, there is not, but you may as well just do the exercise once.
On the Oracle Support site, there is note 261879.1, which gives a script to add a suite of fonts to the Reports server configuration in a single pass. You could use it (or tailor it) to add all the fonts you anticipate ever needing.
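For context, Oracle Reports font configuration lives in plain-text files such as uifont.ali. A hedged example of what a typical PDF font-subsetting entry looks like (not taken from the note; section names and font files vary by install and output format):

[ PDF:Subset ]
"Arial" = "arial.ttf"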
I want to migrate from StarTeam 2005 to TFS 2010 with history. Is there any tool or any way I can do it cost-effectively? I know about the Timely Migration tool, but it is too expensive.
There is no tool out there to do this. You are stuck with paying for Timely Migration or writing it yourself. Capturing history from StarTeam is extremely complex, because of what the view looks like historically. You can roll back the view to a point in time, and that alone works very well, but rolling back to every point in time where a change happened to the view is practically impossible to do using the API:

1. Not everything has an audit record, so you can't rely on the audits.
2. Audit records get purged.
3. There is a special feature to "play back" the history of the view to generate listener events (it requires MPX), but this misses many events.
4. When items are shared, configured, branched, etc., no audits are generated in the project.
5. Even if they were, getting every single change would require iterating through the view history down to the second and analyzing the differences at each step.

So if your project has been active for a month, and each diff of two view configurations takes 5 seconds, migrating would take 5 times as long as the project has existed, i.e. 5 months, and in the meantime the project would be locked down.
So the next-best way to do this is to establish "baselines" to compare. Build labels are a good starting point if you have nightly or continuous builds in your project, or even just certain builds that were QA-certified. You can use these baselines as points to diff/compare and bring the history in at that granularity. While this isn't as granular as the full history, it by definition captures the most important differences to migrate over.
However, keep in mind that even doing it this way does not maintain the links between branch/merge points across the different branches/views. The only way to do this would be to go directly into the StarTeam database to get this information.
I went through all of these steps trying to write my own suite of tools to migrate from StarTeam to Subversion. It was fun, interesting, and imperfect; it showed some promise, but I ultimately never finished it, partly because the time involved would have been far more than the value I got out of it.
Which brings you inevitably to the most important question: what is the business value of maintaining full history? After going through this many times with project teams as a StarTeam administrator, more than 90% of the time it was readily apparent that the better approach is a cut-over: pick a time when you begin working on new work in the new system and freeze work in the old system. It usually can be done with very little downtime for the project team.

You can even start by bringing over a history of production releases to create a rough timeline in the new system. Use your existing comparison tools, in TFS or BeyondCompare or elsewhere, to reproduce each state of your project's source code, docs, etc., reconcile it with your TFS project by checking in or deleting files as needed, and label your TFS project for each build you bring over. Line up all your TFS builds, work items, users, roles, etc., and make sure everything is ready. Then, at cut-over time, take the latest development snapshot from StarTeam and do one more update to your TFS project. Lock your StarTeam users out of the project (for check-ins, anyway) and begin working in TFS. Your TFS project will have a rough history of the most significant baselines, and you can keep your StarTeam repository open to users in case more history is needed.
One other thing to consider is how to create a permanent archive of your project. If your repository is small enough, this is doable, but it gets more time-intensive the larger your project is. First, copy your entire database and vault to a separate instance and get that copy up and running. Then delete all projects EXCEPT the ones you want to archive. Run an online purge and make sure it runs to completion; you may need to restart your server and purge several times. When you are done, the repository should contain only the files and database records needed by your project. At this point you can back up the database and vault and keep them indefinitely. This also reduces the size of your existing StarTeam repository.
I haven't used StarTeam in over three years, but that was a fun ride back in time. I hope you found it useful.
This question may be a bit nebulous, so please bear with me.
I am using Visual Studio, and I am new to the entire realm of unit testing. One thing I do a lot, though, is use unit testing as a quick-and-dirty ad-hoc "administration UI" when I need to just TRY things but don't have time to build an actual admin system.
What I mean is ... sometimes I just want to get some data thrown into my database to see how it looks on a page. So I'll make a dirty unit test...
[Fact]
public void install_some_test_data()
{
    using (var database = RavenDocumentStore())
    {
        using (var session = database.OpenSession())
        {
            // create some objects
            // add some objects
            // save some objects
        }
    }
}
Nowhere in here have I really cared about "testing"; I just like the fact that I can right-click, say "Run it", and it goes, without launching the program, without needing a lot of interaction, etc.
So my question is this:
Is this okay to do? I realize this isn't how a program should be managed long-term, but I was scolded for doing this by another developer simply because I wanted to quickly show them how something they wanted to see worked. Is there really a problem with using these convenient tools as a simpler way of running commands against my database that gives me immediate feedback on whether they passed or failed?
My follow-up question is this: I have had to search pretty hard to find a unit test runner that lets me run individual methods instead of every single unit test in a class. Is this normal? This whole thing confuses me in general. Even when I do use tests for actual testing, why would I always want to run EVERY test every time? Isn't that extremely wasteful of time and resources? How do you control the order they run in? This seems even more pressing when tests depend on some data existing before they can run appropriately.
At the moment, I am using xUnit with the ReSharper test runner, but out of the box Visual Studio doesn't seem to let me run unit tests individually, just in huge batches per class.
Am I the only person confused by this?
Sure, it's actually very easy to execute a single unit test (no external test runner required).
Go to TEST -> Windows -> Test Explorer
This opens the Test Explorer window, which lists all discovered tests.
Now you can right-click on the test you want to execute and select 'run selected methods'.
As to your other question: it's hard to tell what you are asking. Are you demonstrating a proof of concept with a unit test? How does this test replace your admin panel?
Just put your cursor on a test method name, then go to Test -> Run Selected Scope (or similar; the exact name changes by context). That test will execute, and a test list is created for you automatically.
I have a Delphi application that has many dependencies, and it would be difficult to refactor it to use DUnit (it's huge), so I was thinking about using something like AutomatedQA's TestComplete to do the testing from the front-end UI.
My main problem is that a bugfix or new feature sometimes breaks old code that was previously tested (manually) and used to work.
I have set up the application to use command-line switches to open a specific form for testing, and I can create the set of values and clicks needed.
But I have a few questions before I do anything drastic... (and before purchasing anything)
1. Is it worth it?
2. Would this be a good way to test?
3. The result of the test should be in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
4. I would need to set up a test database to do all the automated testing; would there be an easy way to automate resetting the test DB, other than drop user cascade, create user, ..., impdp?
5. Is there a way in TestComplete to specify command-line parameters for an exe?
Does anybody have any similar experiences?
I would suggest you plan to use both DUnit and something like TestComplete, as they each serve a different purpose.
DUnit is great for Unit Testing, but is difficult to use for overall application testing and UI testing.
TestComplete is one of the few automated testing products that actually has support for Delphi, and our QA Engineer tells me that their support is very good.
Be aware, though, that setting up automated testing is a large and time-consuming job. If you rigorously apply unit testing and automated UI testing, you could easily end up with more test code than production code.
With a large (existing) application you're in a difficult situation with regards to implementing automated testing.
My recommendation is to set up Unit Testing first, in conjunction with an automated build server. Every time someone checks anything into source control, the unit tests get run automatically. DO NOT try to set up unit tests for everything straight up - it's just too big an effort for an existing application. Just remember to create unit tests whenever you are adding new functionality, and whenever you are about to make changes. I also strongly suggest that, whenever a bug is reported, you create a unit test that reproduces the bug BEFORE you fix it.
I'm in a similar situation. (Large app with lots of dependencies). There is almost no automated testing. But there is a big wish to fix this issue. And that's why we are going to tackle some of the problems with each new release.
We are about to release the first version of the new product, and the first signs are good. But it was a lot of work, so for the next release we definitely need some way to automate the test process. That's why I'm already introducing unit tests. Due to the dependencies these are not real unit tests, but you have to start somewhere.
Things we have done:
Introduced a more OO approach, because a big part of the code was still procedural.
Moved stuff between files.
Eliminated dependencies where possible.
But there is far more on the list of requirements, ensuring enough work for the entire team until retirement.
And maybe I'm a bit strange, but cleaning up code can be fun. Refactoring without unit tests is a dangerous undertaking, especially if there are a lot of side effects. We used pair programming to avoid stupid mistakes. And lots of test sessions. But in the end we have cleaner code, and the amount of new bugs introduced was extremely low.
Oh, and be sure you know that this is an expensive process. It takes lots of time, and you have to fight the tendency to tackle more than one problem at a time.
I can't answer everything, as I've never used TestComplete, but I can answer some of those.
1 - Yes. Regression testing is worth it. It's quite embarrassing to you as a developer when the client comes back to you when you've broken something that used to work. Always a good idea to make sure everything that used to work, still does.
4 - Oracle has something called Flashback, which lets you create a restore point in the database. After you've done your testing, you can just jump back to this restore point. You can write scripts to use it too, e.g. FLASHBACK DATABASE TO TIMESTAMP TO_TIMESTAMP('2009-02-12 00:00:00', 'YYYY-MM-DD HH24:MI:SS');
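A hedged sketch of scripting a repeatable reset with a guaranteed restore point (the restore point name here is invented; note that FLASHBACK DATABASE has to run against a mounted, not open, instance, and you reopen with RESETLOGS afterwards):

CREATE RESTORE POINT before_tests GUARANTEE FLASHBACK DATABASE;
-- ... run the test suite ...
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_tests;
ALTER DATABASE OPEN RESETLOGS;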
We're looking at using VMWare to isolate some of our testing.
You can start from a saved snapshot, so you always have a consistent environment and local database state.
VMWare actions can be scripted, so you can automatically install your latest build from a network location, launch your tests and shut down afterwards.
Is it worth it?
Probably. Setting up and maintaining tests can be a big job, but when you have them, tests can be executed very easily and consistently. If your project is evolving, some kind of test suite is very helpful.
Would this be a good way to test?
I would say that a proper DUnit test suite is the better first step. However, if you have a large codebase that was not engineered for testing, setting up functional tests is an even bigger pain than setting up GUI tests.
The result of the test should be in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
TestComplete has ADO and BDE interfaces, or you can use the OLE interface from VBScript to access everything that's available.
Is there a way in TestComplete to specify command-line parameters for an exe?
Yes.
One way to introduce unit testing in an (old) application could be to have a "start database" (like the Flashback feature described by Rich Adams).
Then program some unit tests using DUnit to control the GUI.
See "GUI testing with DUnit" at http://delphixtreme.com/wordpress/?p=181
Start every test run by restoring the "start database", so that a known set of data is always used.
I would need to set up a test database to do all the automated testing; would there be an easy way to automate resetting the test DB?
Use transactions: perform a rollback when the test has completed. This should revert everything to the initial state.
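The pattern is language-agnostic. A minimal sketch of it in Python's unittest, using sqlite3 purely so the example runs standalone (against a real Oracle test database you would open the connection with your usual driver instead):

import sqlite3
import unittest

class CustomerTests(unittest.TestCase):
    def setUp(self):
        # In a real suite this would connect to the shared test database;
        # an in-memory sqlite DB is used here only to keep the sketch runnable.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
        self.conn.commit()  # the committed schema is the known starting state

    def test_insert_customer(self):
        # All changes the test makes stay inside the open transaction...
        self.conn.execute("INSERT INTO customers VALUES (1, 'Alice')")
        count = self.conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
        self.assertEqual(count, 1)

    def tearDown(self):
        # ...so rolling back restores the initial state for the next test.
        self.conn.rollback()
        self.conn.close()

if __name__ == "__main__":
    unittest.main()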
Recommended reading:
http://xunitpatterns.com/
I've been using nosetests for the last few months to run my Python unit tests.
It definitely does the job but it is not great for giving a visual view of what tests are working or breaking.
I've used several other GUI-based unit test frameworks that provide a visual snapshot of the state of your unit tests, as well as drill-down features to get to detailed error messages.
Nosetests dumps most of its information to the console, leaving it to the developer to sift through the detail.
Any recommendations?
You can use the rednose plugin to color your console output. The visual feedback is much better with it.
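If I remember the plugin's flag correctly, you install it with pip and then enable it per run:

pip install rednose
nosetests --rednose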
I've used Trac + Bitten for continuous integration. It was a fairly complex setup and required a substantial amount of time to RTFM, set everything up, and then maintain it, but I got nice visual reports with failed tests and error messages, plus graphs over time for failed tests, pylint problems, and code coverage.
Bitten is a continuous integration plugin for Trac with a master-slave architecture. The Bitten master is integrated with and runs together with Trac. A Bitten slave can run on any system that can communicate with the master; it regularly polls the master for build tasks. If there is a pending task (somebody committed something recently), the master sends a "build recipe", similar to Ant's build.xml, to the slave, which follows the recipe and sends back the results. A recipe can contain instructions like "check out code from that repository", "execute this shell script", or "run nosetests in this directory".
The build reports and statistics then show up in Trac.
I know this question was asked 3 years ago, but I'm currently developing a GUI to make nosetests a little easier to work with on a project I'm involved in.
Our project uses PyQt, which made it really simple to start on this GUI, as it provides everything you need to create interfaces. I've not been working with Python for long, but it's fairly easy to get to grips with, so if you know what you're doing it'll be perfect, provided you have the time.
You can convert .ui files created in PyQt Designer to Python scripts with:
pyuic4 -x interface.ui -o interface.py
There are a few good tutorials online to help you get a feel for PyQt. Hope that helps someone :)
I like to open a second terminal next to my editor, in which I run a loop that re-runs nosetests (or any test command, e.g. plain old unittest) every time any file changes. That way you can keep focus in your editor window while seeing the test output update every time you hit 'save'.
I'm not sure what the OP means by 'drill down', but personally all I need from the test output is the failure traceback, which of course is displayed whenever a test fails.
This is especially effective when your code and tests are well-written, so that the vast majority of your tests only take milliseconds to run. I might run these fast unit tests in a loop as described above while I edit or debug, and then run any longer-running tests manually at the end, just before I commit.
You can re-run tests in a loop using the 'watch' utility (but this just runs them every X seconds, which is fine, but not quite snappy enough to keep me happy).
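For example, to re-run the whole suite every two seconds:

watch -n 2 nosetests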
Alternatively, I wrote a quick Python package, 'rerun', which polls for filesystem changes and then re-runs the command you give it. Polling for changes isn't ideal, but it was easy to write, is completely cross-platform, is fairly snappy if you tell it to poll every 0.25 seconds, doesn't cause me any noticeable lag or system load even with large projects (e.g. the Python source tree), and works even in complicated cases (see below).
https://pypi.python.org/pypi/rerun/
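The core of it is just a polling loop. A minimal sketch of the idea (not rerun's actual source; defaulting the command to nosetests is only an example):

import os
import subprocess
import sys
import time

def snapshot(root="."):
    # Map every file path under root to its last-modified time.
    mtimes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtimes[path] = os.path.getmtime(path)
            except OSError:
                pass  # file vanished between walk and stat; ignore it
    return mtimes

def main():
    command = sys.argv[1:] or ["nosetests"]
    previous = snapshot()
    subprocess.call(command)  # run once up front
    while True:
        time.sleep(0.25)  # poll interval
        current = snapshot()
        if current != previous:
            previous = current
            subprocess.call(command)

if __name__ == "__main__":
    main()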
A third alternative is to use a more general-purpose 'wait on filesystem changes' program like 'watchdog', but this seemed heavyweight for my needs, and solutions like this which listen for filesystem events sometimes don't work as I expected (e.g. if Vim saves a file by saving a tmp somewhere else and then moving it into place, the events that happen sometimes aren't the ones you expect.) Hence 'rerun'.