Performance Testing using HP ALM - performance

I am new to testing and HP ALM (though I have worked with HP Quality Center). As part of the process, I am required to do performance testing of interfaces belonging to several work streams such as HR, Finance, etc.
Functional testing has already been carried out and test scripts are available from it. How do I create test scripts as part of the performance testing?

Related

Continuous Integration on hardware-centric firmware?

How can CI be applied in cases where the implementation requires the use of a firmware API (hardware-centric, i.e. cameras, sensors, etc.), or, even in the most convenient case, just data obtained from hardware(*), and thus online testing - at least to my understanding - is considered impractical?
(*) Emulating data for testing purposes may be considered a conventional alternative (i.e. using pseudo-data that emulates data obtained from a sensor).
Are there any common practices for integrating CI into the production stages of such hardware-dependent/embedded systems?
Almost anything you can do on your local computer, you can reasonably do in a CI process. When it comes to hardware, one technique for testing on cloud-provider CI systems is to use software emulators. If an emulator is not available, you might simply mock out those interfaces in your tests.
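As a rough illustration of the mocking approach (the driver and the capture_average function below are hypothetical stand-ins, not a real firmware API):

```python
# test_capture.py -- a minimal sketch of mocking a hardware interface so the
# test can run on any CI box. "read_frame" and "capture_average" are invented.
from unittest import TestCase, main
from unittest.mock import MagicMock


def capture_average(driver, samples=3):
    """Average a few readings taken from whatever driver object we are given."""
    readings = [driver.read_frame() for _ in range(samples)]
    return sum(readings) / len(readings)


class CaptureAverageTest(TestCase):
    def test_average_of_mocked_readings(self):
        # Stand-in for the real sensor driver: no camera or sensor needed.
        fake_driver = MagicMock()
        fake_driver.read_frame.side_effect = [10, 20, 30]

        self.assertEqual(capture_average(fake_driver), 20)
        self.assertEqual(fake_driver.read_frame.call_count, 3)


if __name__ == "__main__":
    main()
```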
If testing on the actual hardware is important, you can attach your hardware to your CI job runner. For example, at my company we have proprietary hardware for our product. We have a 'test rack' which has several of these devices connected to our self-hosted GitLab CI runner. This gives developers who write the firmware, OS, and software that runs on the hardware the ability to script and test against the actual hardware.
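If you go the attached-hardware route, one common pattern is to let hardware-dependent tests skip themselves on machines that don't have the device connected. The sketch below assumes a hypothetical HW_DEVICE environment variable that is set only on the runner attached to the test rack:

```python
# test_on_device.py -- sketch only; HW_DEVICE is an invented convention,
# set on the self-hosted runner that has the real device plugged in.
import os

import pytest

DEVICE_PATH = os.environ.get("HW_DEVICE")  # e.g. /dev/ttyUSB0 on the test rack

requires_hardware = pytest.mark.skipif(
    DEVICE_PATH is None,
    reason="real device not attached; runs only on the hardware test rack",
)


@requires_hardware
def test_device_node_exists():
    # Trivial smoke check that the device node the firmware talks to is present.
    assert os.path.exists(DEVICE_PATH)
```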
Some popular hardware is also available 'as a service' in the cloud, in "device farms" (usually mobile devices such as iPhones) like AWS Device Farm; however, based on your description of the devices you're working with, this doesn't sound like it will be available as a service.

Using TFS for automated testing

First of all, I apologize if this is a dumb question and has a million answers. I've not found anything that explains quite what I'm looking for in depth.
So then: my company utilizes TFS for our test management. Sort of. We use GitHub independently of it for source management, and we use Jira independently of it for QA management, as well as bug reporting and tracking.
We "use" TFS for tracking the pass or fail status of test cases. But they are in no way connected to any of our automation; we still require a person to check the automated tests and pass the test case in TFS. At that point, we should just be using a spreadsheet.
What I so dearly need help with is some sort of learning resource that focuses entirely on the test management and test automation aspects of TFS. I've been searching around but everything I find tends to focus on TFS as a bug reporting and source management system, rather than an automated test system.
And it's 100% possible (in fact, 100% likely) that we're simply utilizing the system wrong. The person who set up TFS originally left abruptly and we've been flailing with it ever since.
In particular we've had problems with:
Hooking a test case into an automated test (Our test code is one large data-driven CodedUI project with specific test methods to test specific things. Ideally we would be able to apply a test method result to a correlated test case)
Mass assignment of test cases. I spent a good hour today just assigning smoke tests to people because there was no way I could figure out how to query "Test IDs 1300 through 1600 with configuration Windows 7 64-bit" and then mass-assign within a sprint (a hedged, scripted workaround is sketched after this list).
Proper organization of test cases and suites. My predecessor arranged everything in a particular sprint in a classic tree-form style, using static test suites. E.g. the sprint itself would be a static suite with a series of child static suites, which themselves contained further static suites. This looks nice format-wise, but is an absolute nightmare to manage (see the above issue; it's why it took an hour).
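A hedged sketch of one possible scripted workaround, using the TFS REST API. Everything below is a placeholder (server URL, project, personal access token, assignee), the right api-version string depends on the TFS release, and note that it updates the Assigned To field on the test case work items rather than per-configuration test points, so treat it as a starting point only:

```python
# bulk_assign.py -- hedged sketch of bulk-assigning a range of test cases
# through the TFS REST API (WIQL query + work item patch). Server, project,
# PAT, assignee and api-version are all placeholders/assumptions.
import requests

TFS = "https://tfs.example.com/DefaultCollection"  # placeholder collection URL
PROJECT = "MyProject"                              # placeholder project
AUTH = ("", "personal-access-token-here")          # PAT over basic auth
ASSIGNEE = "Jane Tester"                           # placeholder user

WIQL = {
    "query": (
        "SELECT [System.Id] FROM WorkItems "
        "WHERE [System.WorkItemType] = 'Test Case' "
        "AND [System.Id] >= 1300 AND [System.Id] <= 1600"
    )
}

# 1. Query the test case IDs in the range.
resp = requests.post(
    f"{TFS}/{PROJECT}/_apis/wit/wiql?api-version=1.0", json=WIQL, auth=AUTH
)
resp.raise_for_status()
ids = [item["id"] for item in resp.json()["workItems"]]

# 2. Patch the Assigned To field on each test case work item.
patch = [{"op": "add", "path": "/fields/System.AssignedTo", "value": ASSIGNEE}]
for work_item_id in ids:
    r = requests.patch(
        f"{TFS}/_apis/wit/workitems/{work_item_id}?api-version=1.0",
        json=patch,
        auth=AUTH,
        headers={"Content-Type": "application/json-patch+json"},
    )
    r.raise_for_status()
    print(f"assigned test case {work_item_id} to {ASSIGNEE}")
```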
Basically I need some sort of resource I can learn from to tell me how to properly utilize TFS for these things.
You should look at Pluralsight's Microsoft Test Manager course:
http://www.pluralsight.com/courses/microsoft-test-manager-2013#!
And the automation course:
http://www.pluralsight.com/courses/codedui-test-automation
These will both give you a detailed look at the capabilities.

Fortify SCA/SSC Report for IA?

I'm a developer working on a mid-size C# application and am using the Fortify secure coding plugin for Visual Studio 2010 to do static code analysis on a regular basis. We're nearing the end of this development cycle, and I was asked to provide a vulnerability report to IA.
I've not had to submit one before, and IA doesn't appear to be familiar with the Fortify reports. My plan is to generate 2 or 3 reports and submit them to IA so they can decide which is most appropriate for their use. I'm not quite certain which report(s) (with which options) would be appropriate for submission to IA. I also have access to generate reports from Audit Workbench and SSC.
So the question is, which Fortify report (with which configs) does your organization provide to your IA shop? Or more generically, what type of Static Analysis vulnerability information do you provide to IA?
Thank you in advance.
My guess is that "IA" stands for "Information Assurance". When you deal with infosec types, you need to be precise in your language, since they need to be. I'm transitioning from being a developer to a somewhat infosec role, so it has been a challenge for me, too.
The IA team has asked you for a vulnerability report, which would be the output of penetration testing. The output of a penetration test would include weaknesses that are proven to be exploitable, whereas a static security assessment from static analysis would include weaknesses in the code that are not necessarily exploitable. There are also limits to the types of issues that static analysis can find, so it is in no way a replacement for dynamic assessments or penetration testing.
Many companies require results of multiple types of analysis as part of their requirement to approve an application. Manual penetration testing is sometimes done, but frequently a scanner such as WebInspect or AppScan is used to scan an application in place of manual pen testing. When results from a web app scanner are combined with static analysis results, it covers both potential weaknesses in code as well as typical vulnerabilities in a deployed application (when running in an environment like your production environment).
You should work with your IA team to determine the process of vetting an application for deployment to production, as well as who is responsible for which steps in the process. You will likely need to schedule them to conduct pen testing on your application in your QA or functional testing environment.
As for the report, if they want the results of your static analysis, I'd look at generating a report that's less than 20 pages for them to start with, which includes the most prevalent issues in web applications, assuming you're writing a web application. I'm partial to the CWE/SANS Top 25 2010 report in SSC, without the "Detailed Report" option.
I would generate the FPR with SCA, extract the FPR file (it's a zip archive), open audit.fvdl, parse the Vulnerabilities XML tag to create a list of issues, and upload them to the reporting software. If some translation is needed, it can be done during the upload. You can check how the Sonar plugin does something like this.
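A minimal sketch of that extraction step, assuming the usual FPR layout (a zip containing audit.fvdl) and the FVDL element names I've seen (Vulnerability, Type, Kingdom); the schema varies between SCA versions, so verify against your own file:

```python
# fpr_to_issues.py -- sketch of pulling issues out of a Fortify FPR.
# Element names below follow one FVDL layout and may need adjusting.
import xml.etree.ElementTree as ET
import zipfile


def local(tag):
    """Strip the XML namespace so we can match on local element names."""
    return tag.rsplit("}", 1)[-1]


def extract_issues(fpr_path):
    with zipfile.ZipFile(fpr_path) as fpr:
        with fpr.open("audit.fvdl") as fvdl:
            tree = ET.parse(fvdl)

    issues = []
    for elem in tree.getroot().iter():
        if local(elem.tag) != "Vulnerability":
            continue
        # Collect descendant fields by local name; adjust to your schema.
        fields = {local(child.tag): child.text for child in elem.iter()}
        issues.append({
            "type": fields.get("Type", "unknown"),
            "kingdom": fields.get("Kingdom", "unknown"),
        })
    return issues


if __name__ == "__main__":
    for issue in extract_issues("MyApp.fpr"):
        print(issue["kingdom"], "-", issue["type"])
```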

Is there a workable approach to use test-driven development (TDD) in a COBOL application?

Has anyone come across any workable approaches to implementing test-driven development (and potentially behavior-driven development) in/for COBOL applications?
An ideal solution would enable both unit and integration testing of both transactional (CICS) and batch-mode COBOL code, sitting atop the usual combination of DB2 databases and various fixed width datasets.
I've seen http://sites.google.com/site/cobolunit/, and it looks interesting. Has anyone seen this working in anger? Did it work? What were the gotchas?
Just to get your creative juices flowing, some 'requirements' for an ideal approach:
Must allow an integration test to exercise an entire COBOL program.
Must allow tests to self-certify their results (i.e. make assertions a la xUnit)
Must support both batch mode and CICS COBOL.
Should allow a unit test to exercise individual paragraphs within a COBOL program by manipulating working storage before/after invoking the code under test.
Should provide an ability to automatically execute a series of tests (suite) and report on the overall result.
Should support the use of test data fixtures that are set up before a test and torn down afterwards.
Should cleanly separate test from production code.
Should offer a typical test to production code ratio of circa 1:1 (i.e., writing tests shouldn't multiply the amount of code written by so much that the overall cost of maintenance goes up instead of down)
Should not require COBOL developers to learn another programming language, unless this conflicts directly with the above requirement.
Could support code coverage reporting.
Could encourage the adoption of different design patterns within the code itself in order to make code easier to test.
Comments welcome on the validity/appropriateness of the above requirements.
Just a reminder that what I'm looking for here is good practical advice on the best way of achieving these kinds of things - I'm not necessarily expecting a pre-packaged solution. I'd be happy with an example of where someone has successfully used TDD in COBOL, together with some guidance and gotchas on what works and what doesn't.
Maybe check out QA Hiperstation. It could cost a lot though (just like every other mainframe product).
I only used it briefly a long time ago, so I cannot claim to be an expert. I used it to run and verify a battery of regression tests in a COBOL/CICS/DB2/MQ-Series type environment and found it to be quite effective and flexible.
I would say this could be one of the pieces of your puzzle, but certainly not the whole thing.
Regardless of how you build/run unit tests, you likely need a summary of how well the tests are doing and how well tested the resulting software is.
See our SD COBOL Test Coverage tool, specifically designed for IBM COBOL.
This answer may not be as easy as you (and I) had hoped.
I have heard about COBOLunit before, but I also don't think it's currently being maintained.
Our team develops an enterprise software product for managing Auto/Truck/Ag dealerships, the vast majority of which is written in AcuCOBOL.
We were able to break some ground in possibly using JUnit (unit testing for Java) to execute and evaluate COBOL unit tests.
This required a custom test adapter that serves as the piping and wiring for data between the COBOL unit tests and the JUnit framework. In the application to be tested, we will then need to add/design hooks that evaluate the input as test case data, perform the test to which the data relates, and report results to the adapter.
We are at the beginning of this experiment and haven't gotten much past the "it's possible" phase into "it's valuable". The first foreseeable snag (which I think exists in all TDD) is how to build harnesses into the program.
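To make the adapter idea concrete, here is a rough analogue sketched in Python rather than JUnit: drive a compiled batch COBOL program as a subprocess against a fixture file and assert on what it writes. The program name, file names, and expected record below are all invented.

```python
# test_payroll_batch.py -- a rough Python analogue of the JUnit-adapter idea:
# run a compiled batch COBOL program against a fixture file and assert on its
# output. Program name, file names and the expected record are hypothetical.
import subprocess
import unittest
from pathlib import Path


class PayrollBatchTest(unittest.TestCase):
    def test_totals_for_known_fixture(self):
        fixture = Path("fixtures/payroll_input.dat")   # set up before the test
        output = Path("out/payroll_report.dat")
        output.parent.mkdir(exist_ok=True)

        # Invoke the COBOL batch program the same way the scheduler would.
        result = subprocess.run(
            ["./PAYROLL", str(fixture), str(output)],
            capture_output=True,
            text=True,
        )
        self.assertEqual(result.returncode, 0, result.stderr)

        # Self-certifying assertion on the fixed-width output record.
        report = output.read_text()
        self.assertIn("TOTAL GROSS PAY: 0012345.67", report)


if __name__ == "__main__":
    unittest.main()
```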

Testing a wide variety of computers with a small company

I work for a small dotcom which will soon be launching a reasonably complicated Windows program. We have uncovered a number of "WTF?"-type scenarios as the program has been passed around to various non-technical types, and we have been unable to replicate them.
One of the biggest problems we're facing is testing: there are a total of three programmers -- only one working on this particular project, me -- no testers, and a handful of assorted other staff (sales, etc.). We are also geographically isolated. The "testing lab" consists of a handful of VMware and VPC images running sort-of fresh installs of Windows XP and Vista, which run on my personal computer. The non-technical types try to be helpful when problems arise, we have trained them on how to report problems most effectively, and the software itself sports a wide array of diagnostic features, but since they aren't computer nerds like us, their reporting is only so useful, and arranging remote-control sessions to dig into the guts of their computers is time-consuming.
I am looking for resources that allow us to amplify our testing abilities without having to put together an actual lab and hire beta testers. My boss mentioned rental VPS services and asked me to look into them; however, they are still largely self-service, and I was wondering if there were any better ways. How have you, or other companies in a similar situation, handled this sort of thing?
EDIT: In the lingo, our goal here is to expand our systems-testing capacity via an elastic computing platform such as Amazon EC2. At this point I am not sure suggestions of beefing up our unit/integration testing are going to help very much, as we are consistently hitting walls at the systems-testing phase. Has anyone attempted to do this kind of software testing on a cloud service like EC2?
Tom
The first question I would ask is whether you have any automated testing being done.
By this I mainly mean unit and integration testing. If not, then I think you need to look into unit testing immediately, first as part of your build processes, and second via automated runs on servers. Even with a UI-based application, it should be possible to find software that can automate the actions of a user and tell you when a test has failed.
Apart from the tests you as developers can think of, every time a user finds a bug you should be able to create a test for that bug, reproduce it with the test, fix it, and then add the test to the automated tests. This way, if that bug is ever reintroduced, your automated tests will find it before the users do. Plus, you have the confidence that your application has been tested for every known issue before the user sees it, without someone having to sit there for days or weeks manually trying to do it.
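To make that concrete, a bug report can be turned directly into a small regression test. The sketch below uses Python's unittest with an invented parse_invoice_date function standing in for whatever code the bug actually lived in:

```python
# test_bug_1234.py -- sketch of capturing a user-reported bug as a regression
# test; "parse_invoice_date" and the bug number are invented for illustration.
import datetime
import unittest


def parse_invoice_date(raw):
    """Code under test: accept dates the way customers actually paste them."""
    return datetime.datetime.strptime(raw.strip(), "%d/%m/%Y").date()


class Bug1234RegressionTest(unittest.TestCase):
    def test_date_with_surrounding_spaces(self):
        # Reported by a user: dates pasted with stray spaces broke the import.
        self.assertEqual(
            parse_invoice_date(" 01/02/2010 "),
            datetime.date(2010, 2, 1),
        )


if __name__ == "__main__":
    unittest.main()
```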
I believe logging application activity and error/exception details is the most useful strategy for communicating technical details about problems on the customer side. You can add a feature to automatically mail you the logs, or let the customer do it manually.
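For example (a sketch only; the file path, SMTP host, and addresses are placeholders): write errors to a rotating log file the customer can send you, and optionally mail critical failures home automatically.

```python
# diagnostics.py -- sketch of customer-side error logging; paths, SMTP host
# and addresses are placeholders.
import logging
from logging.handlers import RotatingFileHandler, SMTPHandler

log = logging.getLogger("myapp")
log.setLevel(logging.DEBUG)

# Rotating file the customer can attach to a support mail.
file_handler = RotatingFileHandler("myapp.log", maxBytes=1_000_000, backupCount=3)
file_handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
)
log.addHandler(file_handler)

# Optionally mail critical errors home automatically.
mail_handler = SMTPHandler(
    mailhost="smtp.example.com",
    fromaddr="app@example.com",
    toaddrs=["support@example.com"],
    subject="MyApp critical error",
)
mail_handler.setLevel(logging.CRITICAL)
log.addHandler(mail_handler)

try:
    result = 1 / 0  # stand-in for real application work
except ZeroDivisionError:
    log.critical("Unhandled error during startup", exc_info=True)
```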
The question is: what exactly do you mean to test? Are you only interested in error-free operation, or are you also concerned with how the software is received on the customer side (usability)?
For technical errors, write a log and manually test different scenarios on different OS installations. If you could add unit tests, that would also help. But I suppose the issue is that it works on your machine but doesn't work somewhere else.
You could also debug remotely by using IDE features like "Attach to remote process", etc. I'm not sure how to do that if you're not in the same office; you would likely need to set up a VPN.
If it's about usability, organize workshops. Have new people work with your application while you record video and audio, then analyze the problems they encountered in team "after-flight" sessions. Talk to the users, ask what they didn't like, and act on it.
Theoretically, you could also build this activity logging into the application. You'll need to have a clear idea, though, of what to log and how to interpret the data.
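If you do build activity logging in, one lightweight option (the event names and log path below are invented) is to append one JSON line per user action, which is easy to analyze later:

```python
# activity_log.py -- sketch of structured user-activity logging for later
# usability analysis; event names and the log path are invented.
import json
import time


def log_event(event, **details):
    """Append one JSON line per user action to a local activity log."""
    record = {"ts": time.time(), "event": event, **details}
    with open("activity.jsonl", "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")


# Example calls sprinkled through the UI layer:
log_event("open_report", report="monthly_sales")
log_event("export_clicked", format="pdf")
log_event("error_dialog_shown", code="E1042")
```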
