RSpec - Mocking a server running out of memory - Ruby

We are using a 3rd party library that occasionally and unpredictably runs out of memory and crashes the app.
Is there any way for us to simulate this in RSpec?
Actually calling exit would end the test suite itself, and we are not trying to verify that the app called exit; rather, we're testing that the backup mechanisms set up before calling the 3rd party library functioned properly.
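Since the goal is to exercise the backup path rather than actually exhaust memory, one approach is to stub the 3rd-party call so it raises NoMemoryError, the error Ruby itself raises when an allocation fails. A minimal sketch, with ThirdParty, Backup, and MyApp as hypothetical stand-ins for the real library, the recovery code, and the wrapper that ties them together:

    RSpec.describe "backup mechanism around the 3rd-party call" do
      it "falls back to the backup when the library runs out of memory" do
        # Stubbing the call to raise NoMemoryError simulates the crash without
        # exhausting real memory or exiting the test process.
        allow(ThirdParty).to receive(:process).and_raise(NoMemoryError)
        allow(Backup).to receive(:restore!)

        MyApp.run_with_backup

        expect(Backup).to have_received(:restore!)
      end
    end

Note that NoMemoryError (like SystemExit) is not a StandardError, so a bare rescue in the code under test will not catch it; the wrapper has to rescue it explicitly. If the failure mode is the library calling exit rather than raising, stubbing the call with and_raise(SystemExit) lets you exercise the same path while keeping the suite alive.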

Related

Gracefully stopping Quarkus application running as Windows service with JNI

We have a Quarkus application that is deployed as an uber jar which is wrapped by an in-house framework that runs it as a Windows service. The framework mirrors the Windows API closely and expects the application to be able to start, then run and finally stop gracefully when signaled by the OS.
The current solution always returns true for start without doing anything (not ideal, but it works); it then calls Quarkus.run() from the run method, which blocks. When stop is called, it calls Quarkus.asyncExit(), which stops the application.
The problem is that Quarkus seems to stop via System.exit(), which makes Windows think the service crashed. Is there a way to avoid that and have it return instead, passing control back to the run method (the one that called Quarkus.run())?
I realize this is not a typical use case for Quarkus, but graceful controlled start and stop from a calling program should be generally useful. We have not yet tried preventing exit with a security manager, as that seems a bit up in the air with Java 17; see jdk-8199704.

Detect UI operation which will "hang" the application if running in service mode

Fellow experts!
I have faced the following dilemma: some of our tools (executables) are started as scheduled tasks, some are started as services, and others as ordinary desktop apps under an interactive Windows user. We are using a code-sharing strategy for source management (this is not debatable for this question).
So the solution I want to find is the following:
Detect, at run time, UI operations that would hang a service/background task (such as calls to Application.ShowException, ShowMessage, MessageDialog, TForm.Show, etc.). When such an action is detected, I want to raise an exception instead. The operation will then fail and we will have a stack trace, but the process will not hang. The most problematic hang-up is when some event processing is done in a transaction and then, somewhere in the code used to process the event (because of an error in code, design, whatever), UI code is executed; the process hangs and parts of the DB can stay locked.
What I think I need to do is use the DDetours library to intercept WinAPI calls to certain routines and raise an exception instead (so that the process does not hang, but simply fails in some method). I also know that creating forms and windows does not hang the app; only attempts to show them to the user do.
Is there a known method of handling this problem? Or perhaps a list of WinAPI routines that hang in service mode?
Thank you in advance.

What are some best practices when calling external executable from ASP.NET WEB API 2

I need to call an external *.exe, compiled in C++, from ASP.NET Web API 2 using Process (System.Diagnostics).
This executable does some image-processing work and uses a lot of memory.
So my question is: if I change my API calls to async, or implement threads, will it help, or does it not matter?
Note: all I have is the executable, so I cannot go for a CLI wrapper.
You can separate the two. Your API is one thing: it needs to be fast and responsive so it can serve clients. Your image-processing work is a different thing.
You could implement a queuing system. The API is responsible for adding a new item to this queue and nothing more. You could keep track of which tasks are running in a separate SQL table, let's say. Imagine you have a SQL table called Tasks. Your API chucks data in there and the status is "Not Running".
Some other app, which could live on another machine entirely, keeps an eye on this table and takes care of running the executable for each item. When it starts, it changes the status to "Running"; when it completes, it's "Done". You do whatever else you need. You could have an API endpoint that takes the ID of the task, so your client can keep calling this endpoint to see what the status is. Or you could raise an event when it's done, depending on your application's needs.
Bottom line, keep things separate; you gain nothing by blocking the API while a resource-heavy task is running. Think about what happens if you start that process 5 times at the same time. You've basically just killed your API.
The app that does the heavy work could even be located on a separate machine, so it doesn't affect the API at all.
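The question is about ASP.NET, but the worker side of this pattern is language-agnostic; here is a minimal sketch in Ruby (the language of this page's main question) of the polling loop the answer describes. The Task model, its columns, and the image_tool.exe path are all hypothetical:

    require "open3"

    # Hypothetical worker process, separate from the API. It polls the Tasks
    # table described above and shells out to the executable one item at a time.
    loop do
      task = Task.find_by(status: "Not Running")   # Task is a stand-in ActiveRecord model
      if task.nil?
        sleep 5                                     # nothing queued; wait and poll again
        next
      end

      task.update(status: "Running")
      _stdout, _stderr, exit_status = Open3.capture3("image_tool.exe", task.input_path)
      task.update(status: exit_status.success? ? "Done" : "Failed")
    end

A client-facing status endpoint then only ever reads the Tasks table, so the API stays responsive no matter how long the executable runs.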

TDD Scenario: Looking for advice

I'm currently in an environment where we are parsing data off of the client's website. I want to use my tests to ensure that when the client changes their site, I know when we are no longer receiving the information.
My first approach was to do pure integration tests, where my tests hit the client's site and assert that the data was found. However, halfway through and 500 tests in, the test run became unbearable and in some cases started timing out. So I cleared out as many tests as I could without losing the core protection they provide, and I'm down to 350 or so. I'm now left with a fear that adding more tests will only break the suite further. I also find myself no longer running the 5+ minute suite (longer for some clients, since the duration depends on how quickly their site responds) when I make changes. I consider this a complete failure.
I've been putting a lot of thought into this and asking around the office, and my plan for my next attempt is to pull down the client's pages and write tests against these pages embedded as resources in my projects. This would give me higher test coverage and let me go back to testing in isolation. However, I would need to be notified when they make changes and then re-pull the pages to test against. I don't think the clients will go along with this.
A suggestion was made to me to augment this with a suite of 'random' integration tests that serve the same function as my failed tests (hit the client's site) but in far smaller numbers than before. I really don't like the idea of random testing, where the same code can sometimes produce red lights and sometimes green. But so far this sounds like the best idea I've heard for still being aware when the client's site has changed and my code no longer finds the data.
Has anyone found themselves testing an environment like this? Any suggestions from the testing community for me?
When you say the big test has become unbearable, it suggests that you are running this test suite manually. You shouldn't have to. It should just be running constantly in the background, at whatever speed it takes to complete the suite - and then start over again (perhaps after a delay if there are associated costs). Only when something goes wrong should you get an alert.
If there is something about your tests that causes them to get slower as their number grows - find it and fix it. Tests should be independent of one another, so simply having more of them shouldn't cause individual tests to time out.
My recommendation would be to isolate, as much as possible, the part of the code that deals with the uncertainty. This part should be an API that works as a service used by all the other code. This way you would be protecting most of your code against changes.
The stable parts of the code should be unit-tested. With that part independent of the connection to the client's site, running the tests should be much quicker, and it would also make those tests more reliable.
The part that has to deal with changes on the client's websites can be kept small. You are not solving the problem this way, but at least you're minimising it and centralising it in a single module of your code.
Getting the clients to expose the data as a web service would be the best outcome for you, but I guess that doesn't depend on you :P.
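The thread isn't Ruby-specific, but the isolation idea is easy to sketch in Ruby (the language of this page's main question). ClientSiteGateway and PriceReporter below are hypothetical names; the point is that only the gateway knows about the client's HTML, so only its few, slow integration tests hit the real site, while everything else is unit-tested against a stub:

    # Hypothetical gateway: the only class that knows how to fetch and parse
    # the client's pages. Everything else sees plain Ruby data.
    class ClientSiteGateway
      def latest_prices
        raise NotImplementedError, "real implementation scrapes the client's site"
      end
    end

    # A consumer that never touches the network and can be unit-tested quickly.
    class PriceReporter
      def initialize(gateway)
        @gateway = gateway
      end

      def cheapest
        @gateway.latest_prices.min_by { |item| item[:price] }
      end
    end

    RSpec.describe PriceReporter do
      it "finds the cheapest item from stubbed gateway data" do
        gateway = instance_double(ClientSiteGateway,
                                  latest_prices: [{ sku: "A1", price: 9.99 },
                                                  { sku: "B2", price: 4.50 }])
        expect(PriceReporter.new(gateway).cheapest).to eq(sku: "B2", price: 4.50)
      end
    end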
You should look at dividing your tests up, maybe into separate assemblies that can be run independently. I typically have a unit tests assembly and a slower running integration tests assembly.
My unit tests assembly is very fast (because the code is tested in isolation using mocks) and gets run very frequently as I develop. The integration tests are slower and I only run them when I finish a feature / check in or if I have a bad feeling about breaking something.
Maybe you could do something similar or even take the idea further and have 3 test suites with the third containing even slower client UI polling tests.
If you don't have a continuous integration server / process, you should look at setting one up. This would continuously build your software and execute the tests. It could be set up to monitor check-ins and work in the background, sending out a notification if anything fails. With this in place you wouldn't care how long your client UI polling tests take, because you would never have to run them yourself.
Definitely split the tests out - separate unit tests from integration tests as a minimum.
As Martyn said, get a Continuous Integration system in place. I use Teamcity, which is excellent, easy to use, free for the first 20 builds, and you can happily run it on your own machine if you don't have a server at your disposal - http://www.jetbrains.com/teamcity/
Set up one build to run on every check in, and make that build run your unit tests, or fast-running tests if you will.
Set up a second build to run at midnight every night (or some other convenient time), and include in this the longer running client-calling integration tests. With this in place, it won't matter how long the tests take, and you'll get a big red flag first thing in the morning if your client has broken your stuff. You can also run these manually on demand, if you suspect there might be a problem.
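The answers above are framed in .NET terms (assemblies, TeamCity), but the same split can be expressed in RSpec, the framework from this page's main question: tag the slow, site-hitting specs and exclude them from the default run, then let the nightly CI job opt back in. The :external tag and RUN_EXTERNAL variable are just illustrative choices:

    # spec/spec_helper.rb
    RSpec.configure do |config|
      # Skip specs tagged :external by default; the nightly CI job either
      # sets RUN_EXTERNAL=1 or runs `rspec --tag external` to include them.
      config.filter_run_excluding :external unless ENV["RUN_EXTERNAL"]
    end

    # spec/integration/client_site_spec.rb
    RSpec.describe "client site scraping", :external do
      it "still exposes the data our parser looks for" do
        # slow test that hits the real client site goes here
      end
    end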

Application Tests VS Logic Tests

Since application tests can now be run on the simulator from Xcode, what would the advantage be, apart from possibly a small saving in execution time, of still separating your tests into logic and application tests?
The differentiation as per the Apple docs:
Logic tests. These tests check the correct functionality of your code in a clean-room environment; that is, your code is not run inside an application. Logic tests let you put together very specific test cases to exercise your code at a very granular level (a single method in a class) or as part of a workflow (several methods in one or more classes). You can use logic tests to perform stress-testing of your code to ensure that it behaves correctly in extreme situations that are unlikely in a running application. These tests help you produce robust code that works correctly when used in ways that you did not anticipate. Logic tests are iOS Simulator SDK–based; however, the application is not run in iOS Simulator: The code being tested is run during the corresponding target’s build phase.
Application tests. These tests check the functionality of your code in a running application. You can use application tests to ensure that the connections of your user-interface controls (outlets and actions) remain in place, and that your controls and controller objects work correctly with your object model as you work on your application. Because application tests run only on a device, you can also use these tests to perform hardware testing, such as getting the location of the device.
Application tests and logic tests are really used for two different things:
Logic tests/unit tests are used to test very small behavior for one or a few methods, e.g. "Given that I create my object like this, is the value of a certain property what I expect it to be?"
Application tests however are used to test the big picture, e.g. "Do I get the right data in my detail view when I tap on a certain table view cell?"
