TimeoutException Occurred When Invoke JDI invokeMethod() - debugging

I am developing a customized debugger as an Eclipse plugin, using the JPDA API. I would like to retrieve the value of an object-reference variable, so I try to use ObjectReference.invokeMethod to invoke its toString() method. My code is as follows:
if (thread.isSuspended()) {
    Method method = retriveToStringMethod(...);
    Value messageValue = objValue.invokeMethod(thread, method, new ArrayList<Value>(), ObjectReference.INVOKE_SINGLE_THREADED);
    stringValue = messageValue.toString();
}
However, it sometimes does not work. For example, given the following code:
1. public static void main(String[] args) {
2.     InsertIntervalBug6 insert = new InsertIntervalBug6();
3.
4.     Interval i1 = new Interval(1, 2);
5.     Interval i2 = new Interval(3, 4);
6.
7. }
It works fine at line 4: I can successfully get the result of invoking the toString() method of the insert variable. However, at line 5 a TimeoutException is reported, even though I set the timeout option to 10 s when starting the JVM, which I think is long enough to retrieve the result of the toString() invocation. The stack trace is as follows. Do you have any idea about the problem? Thanks!
org.eclipse.jdi.TimeoutException: Timeout occurred while waiting for packet 586.
at org.eclipse.jdi.internal.connect.PacketReceiveManager.getReply(PacketReceiveManager.java:186)
at org.eclipse.jdi.internal.connect.PacketReceiveManager.getReply(PacketReceiveManager.java:197)
at org.eclipse.jdi.internal.MirrorImpl.requestVM(MirrorImpl.java:191)
at org.eclipse.jdi.internal.MirrorImpl.requestVM(MirrorImpl.java:226)
at org.eclipse.jdi.internal.ObjectReferenceImpl.invokeMethod(ObjectReferenceImpl.java:428)
at microbat.codeanalysis.runtime.variable.VariableValueExtractor.setMessageValue(VariableValueExtractor.java:518)

I have solved this problem myself. I share the solution in this answer:
The TimeoutException is caused by a deadlock. When invokeMethod() runs the toString() method, it triggers step events in the target JVM. However, my program listens for step events from the debugged program so that it can capture the stepping event and suspend the program for checking variable values. Therefore, the programmatic invocation of toString() suspends the program itself, and invokeMethod() waits on the suspended program until the timeout expires.
The solution is to disable the registered step request before the invocation. Afterwards, the deadlock problem disappears.
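For anyone hitting the same issue, here is a minimal sketch of that workaround, assuming the step request you registered is still reachable; the class and parameter names below are illustrative, not part of my plugin's actual API:
import java.util.Collections;

import com.sun.jdi.Method;
import com.sun.jdi.ObjectReference;
import com.sun.jdi.ThreadReference;
import com.sun.jdi.Value;
import com.sun.jdi.request.StepRequest;

class SafeToStringInvoker {

    // Invokes toString() on objValue while our own step request is disabled,
    // so the induced step events do not make our listener suspend the thread.
    static Value invokeToString(ObjectReference objValue, ThreadReference thread,
                                Method toStringMethod, StepRequest stepRequest) throws Exception {
        stepRequest.disable();   // stop reacting to step events during the programmatic call
        try {
            return objValue.invokeMethod(thread, toStringMethod,
                    Collections.<Value>emptyList(), ObjectReference.INVOKE_SINGLE_THREADED);
        } finally {
            stepRequest.enable(); // restore stepping for normal debugging
        }
    }
}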

Creating an async method which throws an exception after a specified amount of time unless a certain condition is met outside of that function

I am currently working on a Ruby script which is supposed to perform different tasks on a pretty long list of hosts. I am using the net-ssh gem for connectivity with those hosts. The thing is, there seem to be some conditions under which net-ssh times out without throwing an exception. As of now, the script has only once been able to finish a run. Most of the time, the script just hangs at some point without ever throwing an exception or doing anything.
I thought about running all tasks that may timeout in different threads, passing them a pointer to some variable they can change when the tasks finished successfully, and then check that variable for a given amount of time. If the task has not finished by then, throw an exception in the main thread that I can catch somewhere.
This is the first time I am writing something in Ruby. To give a clear demonstration of what I want to accomplish, this is what I'd do in C++:
#include <chrono>
#include <exception>
#include <functional>
#include <thread>

void perform_long_running_task(bool* finished);
void start_task_and_throw_on_timeout(int secs, std::function<void(bool*)> func);

int seconds_to_wait{5};
int seconds_task_takes{6};

int main() {
    start_task_and_throw_on_timeout(seconds_to_wait, &perform_long_running_task);
    // do other stuff
    return 0;
}

void perform_long_running_task(bool* finished) {
    // Do something that may possibly time out...
    std::this_thread::sleep_for(std::chrono::seconds(seconds_task_takes));
    // Finished...
    *finished = true;
}

void start_task_and_throw_on_timeout(int secs, std::function<void(bool*)> func) {
    bool finished{false};
    std::thread task(func, &finished);
    while (secs > 0) {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        secs--;
        if (finished) {
            task.join();
            return;
        }
    }
    // Detach before throwing so the joinable thread's destructor doesn't call std::terminate
    // (in real code the 'finished' flag would also have to outlive the detached thread).
    task.detach();
    throw std::exception();
}
Here, when 'seconds_task_takes' is bigger than 'seconds_to_wait', an exception is thrown in the main thread. If the task finishes in time, everything goes on smoothly.
However, I need to write my piece of software in a dynamic scripting language that can run anywhere and does not need to be compiled. I would be super glad for any advice about how I could write something like the code above in Ruby.
Thanks a lot in advance :)
edit: in the example, I added a std::function parameter to start_task_and_throw_on_timeout so it's reusable for all similar functions
I think the timeout module has everything you need. It allows you to run a block for a limited time and raise an exception if it was not fast enough.
Here is a code example:
require "timeout"
def run(name)
puts "Running the job #{name}"
sleep(10)
end
begin
Timeout::timeout(5) { run("hard") }
rescue Timeout::Error
puts "Failed!"
end
You can play with it here: https://repl.it/repls/CraftyUnluckyCore. The documentation for the module lives here: https://ruby-doc.org/stdlib-2.5.1/libdoc/timeout/rdoc/Timeout.html. Notice that you can customize not only the timeout, but also error class and message, so different jobs may have different kinds of errors.

Batch processing flowfiles in apache nifi

I have written a custom NiFi processor which tries to batch-process input flow files.
However, it seems it is not behaving as expected. Here is what is happening:
I copy-paste some files onto the server. FetchFromServerProcessor fetches those files from the server and puts them in queue1. MyCustomProcessor reads files in batches from queue1. I have a batchSize property defined on MyCustomProcessor, and inside its onTrigger() method I get all flow files from queue1 in the current batch by doing the following:
session.get(context.getProperty(batchSize).asInteger())
The first line of onTrigger() creates a timestamp and adds it to all flow files, so all files in the batch should have the same timestamp. However, that is not happening: usually the first flow file gets one timestamp and the rest of the flow files get another.
It seems that when FetchFromServerProcessor fetches the first file from the server and puts it in queue1, MyCustomProcessor gets triggered and fetches everything in the queue; at that point there is only a single file, so it becomes the only file in that batch. By the time MyCustomProcessor has processed this file, FetchFromServerProcessor has fetched all the remaining files from the server and put them in queue1. So after processing the first file, MyCustomProcessor takes all the files now in queue1 and forms a second batch, whereas I want all files picked up in a single batch.
How can I avoid two batches being formed? I see that people discuss Wait/Notify in this context: 1, 2. But I am not able to make quick sense of those posts. Can someone give me the minimal steps to achieve this using the Wait/Notify processors, or point me to a minimal step-by-step tutorial on using them? Also, is the Wait/Notify pattern the standard approach to the batching problem I described, or is there another standard approach?
It sounds as if this batch size is the required count of incoming flowfiles to CustomProcessor, so why not write your CustomProcessor#onTrigger() as follows:
@Override
public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
    final ComponentLog logger = getLogger();

    // Try to get n flowfiles from the incoming queue
    final Integer desiredFlowfileCount = context.getProperty(batchSize).asInteger();
    final int queuedFlowfileCount = session.getQueueSize().getObjectCount();
    if (queuedFlowfileCount < desiredFlowfileCount) {
        // There are not yet n flowfiles queued up, so don't try to run again immediately
        if (logger.isDebugEnabled()) {
            logger.debug("Only {} flowfiles queued; waiting for {}", new Object[]{queuedFlowfileCount, desiredFlowfileCount});
        }
        context.yield();
        return;
    }

    // If we're here, we do have at least n queued flowfiles
    List<FlowFile> flowfiles = session.get(desiredFlowfileCount);

    try {
        // TODO: Perform work on all flowfiles
        flowfiles = flowfiles.stream().map(f -> session.putAttribute(f, "timestamp", "my static timestamp value")).collect(Collectors.toList());
        session.transfer(flowfiles, REL_SUCCESS);

        // If extending AbstractProcessor, this is handled for you and you don't have to explicitly commit
        session.commit();
    } catch (Exception e) {
        logger.error("Helpful error message");
        if (logger.isDebugEnabled()) {
            logger.error("Further stacktrace: ", e);
        }
        // Penalize the flowfiles if appropriate (also done for you if extending AbstractProcessor
        // and an exception is thrown from this method)
        session.rollback(true);
        // --- OR ---
        // Transfer to failure if they can't be retried
        // session.transfer(flowfiles, REL_FAILURE);
    }
}
The Java 8 stream syntax can be replaced by this if it's unfamiliar:
for (int i = 0; i < flowfiles.size(); i++) {
    // Write the same timestamp value onto all flowfiles
    FlowFile f = flowfiles.get(i);
    flowfiles.set(i, session.putAttribute(f, "timestamp", "my timestamp value"));
}
The distinction between penalization (telling the processor to delay performing work on a specific flowfile) and yielding (telling the processor to wait some period of time before trying to perform any work again) is important.
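Reusing the session and context from the snippet above (and assuming a single FlowFile variable named flowFile), the difference looks roughly like this, as a sketch only:
// Penalize: delay further work on this specific flowfile (the session still owns it),
// typically because something about that flowfile itself failed
flowFile = session.penalize(flowFile);
session.transfer(flowFile, REL_FAILURE);

// Yield: the whole processor backs off and is not scheduled again for the configured
// yield duration, typically because the queue or a downstream system isn't ready yet
context.yield();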
You probably also want the @TriggerSerially annotation on your custom processor to ensure you do not have multiple threads running such that a race condition could arise.
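A rough outline of where that annotation and the batchSize property would sit on the processor class; the property name, default value, and description below are placeholders, not a complete processor:
import java.util.Collections;
import java.util.List;

import org.apache.nifi.annotation.behavior.TriggerSerially;
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.exception.ProcessException;
import org.apache.nifi.processor.util.StandardValidators;

@TriggerSerially  // only one thread executes onTrigger at a time
public class MyCustomProcessor extends AbstractProcessor {

    // Placeholder property; adjust the name and default to your processor
    static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
            .name("Batch Size")
            .description("Number of flowfiles to pull from the incoming queue per batch")
            .required(true)
            .defaultValue("10")
            .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
            .build();

    @Override
    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
        return Collections.singletonList(BATCH_SIZE);
    }

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        // batching logic from the answer above goes here
    }
}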

Why do we need exception handling?

I can check the input, and if it's invalid input from the user, I can use a simple "if condition" that prints "input invalid, please re-enter".
This approach of "if there is a potential for a failure, verify it using an if condition and then specify the right behavior when failure is encountered..." seems enough for me.
If I can basically cover any kind of failure (divide by zero, etc.) with this approach, why do I need this whole exception handling mechanism (exception class and objects, checked and unchecked, etc.)?
Suppose you have func1 calling func2 with some input.
Now, suppose func2 fails for some reason.
Your suggestion is to handle the failure within func2, and then return to func1.
How will func1 "know" what error (if any) has occurred in func2 and how to proceed from that point?
The first solution that comes to mind is an error-code that func2 will return, where typically, a zero value will represent "OK", and each of the other (non-zero) values will represent a specific error that has occurred.
The problem with this mechanism is that it limits your flexibility in adding / handling new error-codes.
With the exception mechanism, you have a generic Exception object, which can be extended to any specific type of exception. In a way, it is similar to an error-code, but it can contain more information (for example, an error-message string).
You can still argue of course, "well, what's the try/catch for then? why not simply return this object?".
Fortunately, this question has already been answered here in great detail:
In C++ what are the benefits of using exceptions and try / catch instead of just returning an error code?
In general, there are two main advantages of exceptions over error-codes, both of which are different aspects of correct coding:
With an exception, the programmer must either handle it or throw it "upwards", whereas with an error-code, the programmer can mistakenly ignore it.
With the exception mechanism you can write your code much "cleaner" and have everything "automatically handled", whereas with error-codes you are obliged to implement a "tedious" switch/case, possibly in every function "up the call-stack".
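A tiny, purely illustrative sketch of that second point; the method names and error codes below are made up, not from the question:
import java.io.IOException;

class ErrorCodesVersusExceptions {

    // --- Error-code style: every caller up the stack must check and forward the code ---
    static int openFileWithCode(String path) {
        return path.isEmpty() ? 1 : 0;   // 0 = OK, non-zero = error
    }

    static int readConfigWithCode(String path) {
        int code = openFileWithCode(path);
        if (code != 0) {
            return code;                 // forgetting this check silently swallows the failure
        }
        // ... parse the file ...
        return 0;
    }

    // --- Exception style: intermediate callers that cannot handle the failure add no boilerplate ---
    static void openFileWithException(String path) throws IOException {
        if (path.isEmpty()) {
            throw new IOException("empty path");
        }
    }

    static void readConfigWithException(String path) throws IOException {
        openFileWithException(path);     // failure propagates automatically to whoever can handle it
        // ... parse the file ...
    }
}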
Exceptions are a more object-oriented approach to handling exceptional execution flows than return codes. The drawback of return codes is that you have to come up with 'special' values to indicate different types of exceptional results, for example:
public double calculatePercentage(int a, int b) {
    if (b == 0) {
        return -1;
    } else {
        return 100.0 * a / b;
    }
}
The above method uses a return code of -1 to indicate failure (because it cannot divide by zero). This would work, but your calling code needs to know about this convention, for example this could happen:
public double addPercentages(int a, int b, int c, int d) {
    double percentage1 = calculatePercentage(a, b);
    double percentage2 = calculatePercentage(c, d);
    return percentage1 + percentage2;
}
The above code looks fine at first glance, but when b or d is zero the result will be unexpected: calculatePercentage will return -1 and add it to the other percentage, which is likely not correct. The programmer who wrote addPercentages is unaware that there is a bug in this code until he tests it, and even then only if he really checks the validity of the results.
With exceptions you could do this:
public double calculatePercentage(int a, int b) {
    if (b == 0) {
        throw new IllegalArgumentException("Second argument cannot be zero");
    } else {
        return 100.0 * a / b;
    }
}
Code calling this method will compile without exception handling, but it will stop when run with incorrect values. This is often the preferred way since it leaves it up to the programmer if and where to handle exceptions.
If you want to force the programmer to handle this exception you should use a checked exception, for example:
public double calculatePercentage(int a, int b) throws MyCheckedCalculationException {
    if (b == 0) {
        throw new MyCheckedCalculationException("Second argument cannot be zero");
    } else {
        return 100.0 * a / b;
    }
}
Notice that calculatePercentage has to declare the exception in its method signature. Checked exceptions have to be declared like that, and the calling code either has to catch them or declare them in their own method signature.
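For illustration, the two options for the caller would look roughly like this; the surrounding method names are invented, and the exception class is the one defined further down:
// Option 1: catch the checked exception locally
public double safePercentage(int a, int b) {
    try {
        return calculatePercentage(a, b);
    } catch (MyCheckedCalculationException e) {
        return 0.0;   // fall back to a default, log the error, etc.
    }
}

// Option 2: declare it and let this method's own callers deal with it
public double forwardedPercentage(int a, int b) throws MyCheckedCalculationException {
    return calculatePercentage(a, b);
}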
I think many Java developers currently agree that checked exceptions are a bit invasive, so most APIs lately gravitate towards the use of unchecked exceptions.
The checked exception above could be defined like this:
public class MyCheckedCalculationException extends Exception {
    public MyCheckedCalculationException(String message) {
        super(message);
    }
}
Creating a custom exception type like that makes sense if you are developing a component with multiple classes and methods which are used by several other components and you want to make your API (including exception handling) very clear.
(see the Throwable class hierarchy)
Let's assume that you need to write some code for some object, which consists of n different resources (n > 3) to be allocated in the constructor and deallocated inside the destructor.
Let's even say that some of these resources depend on each other.
E.g. in order to create a memory map of some file, one would first have to successfully open the file and then perform the OS function for memory mapping.
Without exception handling you would not be able to use the constructor(s) to allocate these resources; you would likely have to use two-step initialization instead.
You would have to take care of the order of construction and destruction yourself, since you're not using the constructor anymore.
Without exception handling you would not be able to return rich error information to the caller -- this is why, in exception-free software, one usually needs a debugger and a debug executable to identify why some complex piece of software is suddenly failing.
This again assumes that not every library is able to simply dump its error information to stderr. stderr is not available in certain cases, which in turn makes all code that uses stderr for error reporting unusable.
Using C++ exception handling, you would simply chain the classes wrapping the matching system calls into base-class or member relationships, and the compiler would take care of the order of construction and destruction, and of only calling destructors for objects whose constructors did not fail.
To start with, methods are blocks of code or statements in a program that give you the ability to reuse the same code instead of duplicating it, which ultimately avoids wasting memory on repeated code.

Unit Test Only Passes in Debug Mode, Fails in Run Mode

I have the following UnitTest:
[TestMethod]
public void NewGamesHaveDifferentSecretCodesTothePreviousGame()
{
    var theGame = new BullsAndCows();

    List<int> firstCode = new List<int>(theGame.SecretCode);
    theGame.NewGame();
    List<int> secondCode = new List<int>(theGame.SecretCode);
    theGame.NewGame();
    List<int> thirdCode = new List<int>(theGame.SecretCode);

    CollectionAssert.AreNotEqual(firstCode, secondCode);
    CollectionAssert.AreNotEqual(secondCode, thirdCode);
}
When I run it in Debug mode, my code passes the test, but when I run the test as normal (run mode) it does not pass. The exception thrown is:
CollectionAssert.AreNotEqual failed. (Both collection contain same elements).
Here is my code:
// constructor
public BullsAndCows()
{
    Gueses = new List<Guess>();
    SecretCode = generateRequiredSecretCode();
    previousCodes = new Dictionary<int, List<int>>();
}

public void NewGame()
{
    var theCode = generateRequiredSecretCode();

    if (previousCodes.Count != 0)
    {
        if (!isPreviouslySeen(theCode))
        {
            SecretCode = theCode;
            previousCodes.Add(previousCodes.Last().Key + 1, SecretCode);
        }
    }
    else
    {
        SecretCode = theCode;
        previousCodes.Add(0, theCode);
    }
}
previousCodes is a property on the class; its data type is a Dictionary with an integer key and a List of integers as the value. SecretCode is also a property on the class; its data type is a List of integers.
If I were to make a guess, I would say the reason is that the NewGame() method is called again while the first call hasn't really finished what it needs to do. As you can see, other methods are called from within the NewGame() method (e.g. generateRequiredSecretCode()).
When running in Debug mode, the slow pace of my pressing F10 gives those calls sufficient time to finish.
But I am not really sure how to fix that, assuming I am right in my identification of the cause.
What happens to SecretCode when generateRequiredSecretCode generates a duplicate? It appears to be unhandled.
One possibility is that you are getting a duplicate, so SecretCode remains the same as its previous value. How does the generator work?
Also, you didn't show how the BullsAndCows constructor initializes SecretCode. Is it calling NewGame?
I doubt the speed of keypresses has anything to do with it, since your test method calls the functions in turn without waiting for input. And unless generateReq... is spawning a thread, it will complete whatever it is doing before it returns.
--after update--
I see 2 bugs.
1) The very first SecretCode generated in the constructor is not added to the list of previousCodes, so the duplicate checking won't catch it if the 2nd game has the same code.
2) After previousCodes is populated, you don't handle the case where you generate a duplicate: a duplicate is previouslySeen, so you don't add it to the previousCodes list, but you don't update SecretCode either, so it keeps the old value.
I'm not exactly sure why this only shows up in run mode, but it could be a difference in how the random number generator gets seeded. See How to randomize in WPF. Run mode is faster, so consecutive generator instances end up using the same timestamp as their seed and therefore generate exactly the same sequence of digits.
If that's the case, you can fix it by making the Random instance a class field instead of creating a new one for each call to the generator.

Best practice for incorrect parameters on a remove method

So I have an abstract data type called RegionModel with a series of values (Region), each mapped to an index. It's possible to remove a number of regions by calling:
regionModel.removeRegions(index, numberOfRegionsToRemove);
My question is what's the best way to handle a call when the index is valid:
(between 0 (inclusive) and the number of Regions in the model (exclusive))
but the numberOfRegionsToRemove is invalid:
(index + numberOfRegionsToRemove > the number of regions in the model)
Is it best to throw an exception like IllegalArgumentException or just to remove as many Regions as I can (all the regions from index to the end of the model)?
Sub-question: if I throw an exception what's the recommended way to unit test that the call threw the exception and left the model untouched (I'm using Java and JUnit here but I guess this isn't a Java specific question).
Typically, for structures like this, you have a remove method which takes an index and if that index is outside the bounds of the items in the structure, an exception is thrown.
That being said, you should be consistent with whatever that remove method that takes a single index does. If it simply ignores incorrect indexes, then ignore it if your range exceeds (or even starts before) the indexes of the items in your structure.
I agree with Mitchel and casperOne -- an Exception makes the most sense.
As far as unit testing is concerned, JUnit 4 allows you to test for exceptions directly:
http://www.ibm.com/developerworks/java/library/j-junit4.html
You would need only to pass parameters which are guaranteed to cause the exception, and add the correct annotation (@Test(expected=IllegalArgumentException.class)) to the JUnit test method.
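For example, a test along those lines might look like this (RegionModel and the fixture setup are assumed here, not shown):
import org.junit.Test;

public class RegionModelTest {

    // assumed to be created in a @Before setup method (not shown)
    private RegionModel regionModel;

    @Test(expected = IllegalArgumentException.class)
    public void removeRegionsThrowsWhenRangeIsTooLarge() {
        // 0 is a valid index, but the count extends past the end of the model
        regionModel.removeRegions(0, regionModel.length() + 1);
    }
}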
Edit: As Tom Martin mentioned, JUnit 4 is a decent-sized step away from JUnit 3. It is, however, possible to also test exceptions using JUnit 3. It's just not as easy.
One of the ways I've tested exceptions is by using a try/catch block within the class itself, and embedding Assert statements within it.
Here's a simple example -- it's not complete (e.g. regionModel is assumed to be instantiated), but it should get the idea across:
public void testRemoveRegionsInvalidInputs() {
    int originalSize = regionModel.length();
    int index = 0;
    int numberOfRegionsToRemove = 1000; // > regionModel's current size

    try {
        regionModel.removeRegions(index, numberOfRegionsToRemove);
        // The exception jumps straight to the 'catch' block, so this line only runs
        // if the IllegalArgumentException wasn't thrown
        Assert.assertTrue("Exception not thrown!", false);
    } catch (IllegalArgumentException e) {
        Assert.assertTrue("Exception thrown, but regionModel was modified", regionModel.length() == originalSize);
    } catch (Exception e) {
        Assert.assertTrue("Incorrect exception thrown", false);
    }
}
I would say that an exception such as IllegalArgumentException would be the best way to go here. If the calling code is not passing a workable value, you wouldn't necessarily want to trust that it really meant to remove what it asked for.
