IRS A2A - Using AATS once in production for logic testing

Back in 2017 we completed the required AATS testing scenarios successfully. The IRS subsequently moved our TCC to production, and we have been filing in 2017 and 2018 without issue.
My company is in the process of rebuilding its SOAP application, and we would like to run a number of test scenarios of our own to help validate some logic as well as a few edge cases. Are we able to send data to AATS (using IRS name controls) and read the IRS responses?

Yes, you should still be able to transmit over the AATS service; you'll just need to switch the required pieces of information, such as the endpoint and the TCC. I still occasionally use AATS from my local copy of our service to test any issues that people bring up.
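For example, a minimal sketch of keeping the environment-specific pieces in one place, so the transmitter can be pointed at AATS or production with a single switch. The endpoint URLs and TCC values below are placeholders, not the real IRS addresses:

// Hypothetical Java sketch - URLs and TCCs are placeholders.
public enum IrsEnvironment {
    AATS("https://aats.example/a2a/transmit", "TESTTCC"),
    PRODUCTION("https://prod.example/a2a/transmit", "PRODTCC");

    public final String endpoint;
    public final String tcc;

    IrsEnvironment(String endpoint, String tcc) {
        this.endpoint = endpoint;
        this.tcc = tcc;
    }
}

The SOAP client then reads the endpoint and TCC from the selected constant, so nothing else in the transmission code has to change between test and production.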

Related

How to track & trend end to end performance (the client experience)

I am trying to figure out how best to track and trend end-to-end performance between releases. By end to end I mean the experience of the client visiting this app via a browser. This includes download time, DOM rendering, JavaScript execution, etc.
Currently I am running load tests using JMeter, which is great for proving application and database capacity. Unfortunately, JMeter will never show me the full picture of the user experience: it is not a browser, so it will never simulate the impact of JavaScript and DOM rendering. For example, if time to first byte is 100ms but it takes the browser 10 seconds to download assets and render the DOM, we have problems.
I need a tool to help me with this. My initial idea is to leverage Selenium. It could run a set of tests (login, view this, create that) and record timings for each. We would need to run the same scenario multiple times, likely across a set of browsers. This would be done before every release and would allow me to identify changes in the experience for the user.
For example, this is what I would like to generate:
action      | v1.5 | v1.6 | v1.7
------------|------|------|------
login       | 2.3s | 3.1s | 1.2s
create user | 2.9s | 2.7s | 1.5s
The problems with Selenium are that 1. I am not sure it is designed for this, and 2. it appears that DOM-ready and JavaScript rendering are really hard to detect.
Is this the right path? Does anyone have any pointers? Are there tools out there that I could leverage for this?
I think you have good goals, but I would split them:
Measuring DOM rendering, JavaScript rendering, etc. is not really part of the "experience from the client visiting this app via a browser", because your clients are usually unaware that you are "rendering the DOM" or "running JavaScript" - and they don't care. But these are things I'd want to address after every committed change, not just release to release, because it can be hard to trace a degradation back to a particular change if such a test is not running all the time. So I would put it in continuous integration at the build level. See a good discussion here.
Then you probably want to know whether server-side performance is the same, worse, or better. For that, JMeter is ideal. Such testing could be done on a schedule (e.g. nightly or on each release) and can be automated using, for example, the JMeter plug-in for Jenkins. If server-side performance got worse, you don't really need end-to-end testing, since you already know what will happen.
But if the server is doing well, then an "end user experience" test using a real browser has real value, so Selenium actually fits well here. Since it can be integrated with any of the testing frameworks (JUnit, NUnit, etc.), it also fits into an automated process and can generate a report, including durations (JUnit, for instance, has a TestWatcher which allows you to add consistent duration measurement to every test).
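A minimal sketch of that idea (the test class and log format are illustrative, not from the question):

// JUnit 4 sketch: log a duration for every test via a TestWatcher rule.
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;

public class LoginTimingTest {
    private long startNanos;

    @Rule
    public TestWatcher timer = new TestWatcher() {
        @Override
        protected void starting(Description d) {
            startNanos = System.nanoTime();
        }

        @Override
        protected void finished(Description d) {
            long ms = (System.nanoTime() - startNanos) / 1_000_000;
            System.out.println(d.getMethodName() + " took " + ms + " ms");
        }
    };

    @Test
    public void login() {
        // drive the login flow with Selenium WebDriver here
    }
}

Collecting these numbers per build gives you exactly the action-by-version table from the question.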
After all this automation, I would also do a "real end user experience" test while a JMeter performance test is running against the same server: get a real person to experience the app while it's under load. People, unlike automation, are unpredictable, which is good for finding bugs.
Regarding "JMeter is not a browser". It is really not a browser, but it may act like a browser given proper configuration, so make sure you:
add HTTP Cookie Manager to your Test Plan to represent browser cookies and deal with cookie-based authentication
add HTTP Header Manager to send the appropriate headers
configure HTTP Request samplers via HTTP Request Defaults to
Retrieve all embedded resources
Use thread pool of around 5 concurrent threads to do it
Add HTTP Cache Manager to represent browser cache (i.e. embedded resources retrieved only once per virtual user per iteration)
if your application is build on AJAX - you need to mimic AJAX requests with JMeter as well
Regarding "rendering", for example you detect that your application renders slowly on a certain browser and there is nothing you can do by tuning the application. What's next? You will be developing a patch or raising an issue to browser developers? I would recommend focus on areas you can control, and rendering DOM by a browser is not something you can.
If you still need these client-side metrics for any reason, you can consider running a WebDriver Sampler alongside the main JMeter load test so that real browser metrics are also added to the final report. You can even use the Navigation Timing API to collect exact timings and add them to the load test report.
See Using Selenium with JMeter's WebDriver Sampler to get started.
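For instance, a rough sketch using the Selenium Java bindings that reads the Navigation Timing figures straight from the browser (the URL is hypothetical):

// Pull page-load timing out of the browser after navigation.
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class NavigationTimingDemo {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("https://your-app.example/login"); // hypothetical URL
        long loadMs = (Long) ((JavascriptExecutor) driver).executeScript(
            "var t = window.performance.timing;" +
            "return t.loadEventEnd - t.navigationStart;");
        System.out.println("full page load: " + loadMs + " ms");
        driver.quit();
    }
}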
There are multiple options for tracking your application's performance between builds (and JMeter test executions), e.g.:
- JChav - JMeter Chart History And Visualisation - a standalone tool
- Jenkins Performance Plugin - a Continuous Integration solution

Performance improvement for web services

We have a web service that is called to provide the delivery date of a product while purchasing on our eCommerce website.
We are using IBM Sterling Order Management on the backend, with its out-of-the-box (OOB) web service.
This web service (WSDL) is taking a long time - more than 40 seconds - which causes timeout exceptions in other integrated systems (middleware).
So we want to improve the performance of this web service. Could you please suggest ways to improve it? Would it improve if the server's specs were upgraded? As it is an OOB service, we can't customize it.
First of all, you need to figure out the performance bottleneck. To start with, you could put a verbose trace on the OOB web service. Use the logs to see if you can zero in on any particular component or SQL consuming the majority of the time. If it's SQL, you can tune/baseline the OOB queries/tables using indexes.
If you have any user exits implemented (for the OOB API), ensure that they are lean and aren't making any expensive API calls like the changeOrder API.
One of the questions to ask here is whether the web service needs to respond with the actual processing results, or whether it could move the actual processing to the background (e.g. a separate integration server) and just respond with a simple acknowledgement of the request. If an acknowledgement is enough, you could move the actual processing to a separate asynchronous service, along the lines of the sketch below.
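A minimal sketch of that split, assuming the slow OOB call can be wrapped; the class and method names here are illustrative, not Sterling APIs:

// Accept the request, queue the real work, reply immediately.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DeliveryDateService {
    private final ExecutorService worker = Executors.newFixedThreadPool(4);

    public String requestDeliveryDate(String orderXml) {
        worker.submit(() -> processDeliveryDate(orderXml)); // slow OOB call runs in the background
        return "<ack>received</ack>";                       // caller gets an immediate acknowledgement
    }

    private void processDeliveryDate(String orderXml) {
        // invoke the slow Sterling OOB service here and publish the result
        // (e.g. via a message queue, callback, or polling endpoint)
    }
}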
First, try to find out where the actual problem is. A few pointers:
1) Check in OMS how much time the service takes with the same input you are using to invoke the web service.
2) If the response time from the OMS end is fine, check the network latency/bandwidth (a rough client-side timing sketch follows this list).
3) Check CPU usage on the server while hitting the web service.
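As a rough client-side timing sketch for point 2 - the endpoint URL and SOAP body are placeholders - to compare end-to-end latency against the time OMS reports internally:

// Time a single SOAP round trip from the caller's side (Java 9+).
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SoapTiming {
    public static void main(String[] args) throws Exception {
        byte[] soapBody = "<soapenv:Envelope>...</soapenv:Envelope>".getBytes("UTF-8");
        long start = System.nanoTime();
        HttpURLConnection conn = (HttpURLConnection)
            new URL("http://oms-host.example/ws/deliveryDate").openConnection();
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        try (OutputStream out = conn.getOutputStream()) { out.write(soapBody); }
        conn.getInputStream().readAllBytes(); // drain the response
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("end-to-end: " + elapsedMs + " ms");
    }
}

If this number is far above what OMS reports for the same input, the gap is in the network or middleware rather than in the service itself.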

Load testing WCF services gives huge (>200 sec) response times

I have a service being load tested by a third party. A few minutes after starting, we start to see requests hanging for a very long period of time and the caller ultimately times out (after 60 seconds).
They are testing with 15 users, each user using two devices at once, so a total of 30 connections.
The service is a simple façade to a more complex operation that calls an external system. Benchmarking our communications with the external system shows everything responding in the time we would expect (sub-200ms).
The IIS logs reveal a bunch of very slow requests (> 200 sec) which ultimately do return a 200 and have the Win32 error code ERROR_NETNAME_DELETED (error 64). I have checked the service log and can match the responses to the requests (based on the SOAP message id), and I can see that we do eventually respond with the correct information (although the client has long since given up).
Any ideas as to what could be causing this behavior? We're hosting in IIS using wsHttpBinding and we're using WS-Security with x509 certificates (message & transport encryption).
We don't have benchmark logging inside of our service but the code is a very simple mapping of the WCF request to the server request, making the request, and mapping the response to the WCF response. We do this manually and there is no parsing involved (straight assignments).
After a detailed investigation, including getting Microsoft support involved, we found we were hitting the serviceThrottling defaults, specifically maxConcurrentSessions. We determined this from perfmon - there is a counter for it. We were unsure why we saw this, as the service behaved when called with a .NET client.
It turns out that the Java consumer of this application, using CXF, was not respecting the WSDL (specifically the part about WS-SecureConversation) and was not closing sessions out when it closed its connection.
Our solution was to jack up maxConcurrentSessions to a high number and set the inactivityTimeout down low (to a minute) to force session abandonment. In addition, we set establishSecurityContext to false to avoid the WS-SecureConversation negotiation consuming an additional session.
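For reference, the throttling and secure-conversation pieces live in configuration along these lines (an illustrative web.config fragment with example values, not our exact settings; where the inactivity timeout is set depends on the binding in use):

<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="HighSessionThrottle">
        <!-- example value, well above the WCF default -->
        <serviceThrottling maxConcurrentSessions="1000" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <bindings>
    <wsHttpBinding>
      <binding name="SecureBinding">
        <security mode="Message">
          <!-- stop WS-SecureConversation consuming an extra session -->
          <message clientCredentialType="Certificate" establishSecurityContext="false" />
        </security>
      </binding>
    </wsHttpBinding>
  </bindings>
</system.serviceModel>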
The solution is inelegant, as the service logs are littered with errors about forced session closures, but it fixed the issue we were seeing. Unfortunately we had a requirement for WS-Security, so our solution needed to stick with that.
I hope this helps someone as this was an interesting and time consuming problem to pin down.

Is this Braintree testing multi-purchase error something I should worry about?

I'm trying to figure out how to test with Braintree, and I'm running into what feels like a bandwidth error.
response = ::Braintree::Customer.create(payment_method_nonce: Braintree::Test::Nonce::Transactable)
token = response.customer.credit_cards.first.token
#so far so good
response = ::Braintree::Transaction.sale(payment_method_token: token, amount: "1.00")
#still good
response = ::Braintree::Transaction.sale(payment_method_token: token, amount: "1.00")
#response is failure
# => Braintree::ErrorResult ... status: "gateway_rejected"
All that takes place without a pause.
If I wait a bit and run the sale line again, it works again.
This of course sets up a problem with test scripts. I can mock out the actual connection to Braintree, but I'm slightly worried about this. Should I be?
I work at Braintree. If you have more questions, you can always get in touch with our support team.
You can see what gateway_rejected means on the transaction statuses page of the API docs:
Gateway rejected
The gateway rejected the transaction because AVS, CVV, duplicate or fraud checks failed.
Transactions also have a gateway rejection reason, which in this case will be duplicate.
You can find more information about duplicate checking settings in the control panel docs:
Configure duplicate transaction checking
Duplicate transaction checking is enabled by default with a 30-second window in both the sandbox and production environments. These settings can be updated or disabled by users with Account Admin privileges.
1) Log into the Control Panel
2) Navigate to Settings > Processing > Duplicate Transaction Checking
3) Click Edit to adjust the time window, or Enable/Disable to turn the feature on or off
Looks like it may be a rate-limit error. Search their help/docs/site for information related to rate limiting so you know what the limits are and can work around them.
However, if you're talking about automated tests, I would recommend not using external services in your test suite and mocking out everything. Ideally you want your test suite to be able to run even when the network connection is down, and you don't want it slowing down when third-party services or your own network are slow.
If you really want a full integration test against all your third-party services, you can create a special set of tests that do that, annotated with something like "#external", and then schedule them to run once a week or so just to flag unexpected changes or errors.
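As one concrete shape for that (the question's code is Ruby, but the same idea in a JUnit 4 suite looks roughly like this; the External marker interface is something we define ourselves):

// Tag tests that hit real third-party services so the default build can skip them.
import org.junit.Test;
import org.junit.experimental.categories.Category;

interface External {} // marker interface used as the JUnit category

public class BraintreeIntegrationTest {
    @Category(External.class)
    @Test
    public void createsCustomerAgainstSandbox() {
        // real call to the Braintree sandbox goes here
    }
}

The build tool can then exclude the External category from the default run and execute it only on the weekly schedule.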

Client-Side API for TFS / MTM TestRunner?

Does anyone know for a fact whether or not Microsoft Test Manager for TFS 2010 throws any client-side events like OnFinished/OnSaved, etc.? I am asking because our business process requires a certain minimum amount of information in each test run/result to be provided prior to closing the test run (e.g. in the case of a failed step, a reason and/or defect id has to be provided in the affected step's comment field).
Post-processing / report-driven checks basically mean the tester can make 'errors' and we'll have to re-test the whole test case, instead of having a prompt process check which allows fixing the test case immediately.
You could possibly implement this by developing a Custom Diagnostic Data Adapter.
Using perhaps the TestCaseEnd event, you can get to the TestElement via the TestCaseEndEventArgs and do your processing.
