Stack Overflow at java.util.Properties.getProperty

When running my application, after 40,000 transactions I get a StackOverflowError at java.util.Properties.getProperty.
The stack trace is below:
java.lang.StackOverflowError
at java.util.Hashtable.get(Hashtable.java:334)
at java.util.Properties.getProperty(Properties.java:932)
at java.util.Properties.getProperty(Properties.java:934)
... (the Properties.getProperty frame above repeats many more times)
at java.util.Properties.getProperty(Properties.java:934)
at java.lang.System.getProperty(System.java:653)
at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:67)
at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:32)
at java.security.AccessController.doPrivileged(Native Method)
at java.io.PrintWriter.<init>(PrintWriter.java:78)
at java.io.PrintWriter.<init>(PrintWriter.java:62)
at java.util.logging.SimpleFormatter.format(SimpleFormatter.java:71)
at org.apache.juli.FileHandler.publish(FileHandler.java:133)
at java.util.logging.Logger.log(Logger.java:481)
at java.util.logging.Logger.doLog(Logger.java:503)
at java.util.logging.Logger.logp(Logger.java:703)
at org.apache.commons.logging.impl.Jdk14Logger.log(Jdk14Logger.java:101)
at org.apache.commons.logging.impl.Jdk14Logger.error(Jdk14Logger.java:149)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:253)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:879)
at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
at java.lang.Thread.run(Thread.java:662)
I'm not able to trace where the error is being thrown from.

I don't mean to state the obvious, but maybe you should check whether you are creating variables inside the loop that performs these transactions (the 40,000-repetition one :-)) instead of reusing the same variable. Re-evaluating the bound inside the loop condition also creates unnecessary work, and string concatenation can produce massive load, too. So, if you have something like this:
for (int i = 0; i < getNumberOfTransactions(); i++) { // bound re-evaluated on every iteration
    int currentValue = myTransaction.getSomeData();   // new variable declared on every iteration
}
You should write something like this instead:
int numberOfTransactions = getNumberOfTransactions();
int currentValue = 0;
for (int i = 0; i < numberOfTransactions; i++) {
    currentValue = myTransaction.getSomeData();
}
Although this example uses only two integer variables, when multiple variables are created inside the loop (especially through string concatenation), it can consume a lot of memory. If you do concatenate strings, use the StringBuilder class instead.

The simple answer is that your test case is running out of memory. A possible solution would be to increase the memory available to the JVM. I believe the default memory setting for the JVM is 256M or maybe 512M. This should be sufficient for your test execution. A short-term solution is to use -Xmx1700m (assuming you have the 1.7 GB of memory to spare).

Related

update aTable set a,b,c = func(x,y,z,…)

I need some quick how-to advice. The following scenario is based on the C API already available in my 64-bit MonetDBLite build; the intention is to use it with some ad hoc C functions.
Short: how can I achieve or simulate the following scenario:
update aTable set a,b,c = func(x,y,z,…)
Long version: many algorithms return more than one output, multiple regression for instance:
bool m_regression(IN const double **data, IN const int cols, IN const int rows, OUT double *fit_values, OUT double *residuals, OUT double *std_residuals, OUT double &p_value);
In order to minimize data transfer between MonetDB and the computationally heavy function, all of those results are generated in one step. The question is how I can transfer them back at once, minimizing computation time and memory traffic between MonetDB and the external C/C++ (/R/Python) function.
My first thought was something like this:
1. update aTable set dummy = func_compute(x,y,z,…)
where dummy is a temporary __int64 field and func_compute computes all the necessary outputs and stores the result behind a dummy pointer. To make sure there is no issue with constant estimation, the first returned value in the array will be the real dummy pointer, and the rest just incremented values dummy + i;
2. update aTable set a = func_ret(dummy, 1), b= func_ret (dummy, 2), c= func_ret (dummy, 3) [, dummy=func_free(dummy)];
Assuming func_ret receives the dummy values in the same order they were returned by the first call, I would just copy the prepared result into the provided storage. In case the order is not preserved, I will need an extra step to get the minimum (the real dummy pointer) and then use the offset of the current value to look up my array.
__int64 real_dummy = __inputs[0][0];
double *my_pointer_data = (double *) (real_dummy + __inputs[1][0] * sizeof(double)* row_count);
memcpy(__outputs[0], my_pointer_data, sizeof(double)* row_count);
// or ============================
__int64 real_dummy = minimum(__inputs[0]);
double *my_pointer_data = (double *) (real_dummy + __inputs[0][1] * sizeof(double)* row_count);
for (int i = 0; i < row_count; i++)
    __outputs[0][i] = my_pointer_data[__inputs[0][i] - real_dummy];
It is less relevant how I am going to free the temporary memory; it can be done in the last statement of the update or in a new fake update statement using func_free.
The problem is that, even if I save some (big) computation time, the dummy column is still passed 3 times (is there any chance that the memory is actually not copied?).
Is there any better way of achieving this?
I am not aware of a good way of doing this, sorry. You could retrieve the table, add your columns as BATs in whichever way you like and write it back.

Should I expect to see the counter in `for` loop changed inside its body? [closed]

Closed as opinion-based; it is not currently accepting answers. (Closed 6 years ago.)
I'm reading someone else's code and they separately increment their for loop counter inside the loop, as well as including the usual afterthought. For example:
for( int y = 4; y < 12; y++ ) {
    // blah
    if( var < othervar ) {
        y++;
    }
    // blah
}
Based on the majority of code others have written and read, should I be expecting to see this?
The practice of manipulating the loop counter within a for loop is not exactly widespread. It would surprise many of the people reading that code. And surprising your readers is rarely a good idea.
The additional manipulation of your loop counter adds a ton of complexity to your code because you have to keep in mind what it means and how it affects the overall behavior of the loop. As Arkady mentioned, it makes your code much harder to maintain.
To put it simply, avoid this pattern. When you follow "clean code" principles, especially the single layer of abstraction (SLA) principle, there is no such thing as
for (something)
    if (somethingElse)
        y++
Following the principle requires you to move that if block into its own method, making it awkward to manipulate some outer counter within that method.
But beyond that, there might be situations where something like your example makes sense; for those cases, why not use a while loop instead?
In other words: the thing that makes your example complicated and confusing is the fact that two different parts of the code change your loop counter. So another approach could look like:
while (y < whatever) {
    ...
    y = determineY(y, who, knows);
}
That new method could then be the central place to figure out how to update the loop variable.
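A minimal sketch of what such a helper might look like (determineY, who, and knows are just the placeholder names from the snippet above, not anything taken from the question):

int determineY(int y, int who, int knows) {
    // Every rule about how the counter advances lives in one place,
    // so the loop body above stays simple.
    if (who < knows) {
        return y + 2;   // skip the next element
    }
    return y + 1;       // normal advance
}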
I beg to differ with the acclaimed answer above. There is nothing wrong with manipulating the loop control variable inside the loop body. For example, here is the classic example of cleaning up a map:
for (auto it = map.begin(), e = map.end(); it != e; ) {
    if (it->second == 10)
        it = map.erase(it);
    else
        ++it;
}
Since it has rightfully been pointed out that iterators are not the same as a numeric control variable, let's consider an example of parsing a string. Assume the string consists of a series of characters, where characters prefixed with '\' are considered special and need to be skipped:
for (size_t i = 0; i < s_len; ++i) {
    if (s[i] == '\\') {
        ++i;
        continue;
    }
    process_symbol(s[i]);
}
Use a while loop instead.
While you can do this with a for loop, you should not. Remember that a program is like any other piece of communication, and must be done with your audience in mind. For a program, the audience includes the compiler and the next person to do maintenance on the code (likely you in about 6 months).
To the compiler, the code is taken very literally -- set up an index variable, check the condition, run the loop body, execute the increment, then check the condition again to see whether to loop some more. The compiler doesn't care if you monkey with the loop index.
To a person however, a for loop has a specific implied meaning: Run this loop a fixed number of times. If you monkey with the loop index, then this violates the implication. It's dishonest in a sense, and it matters because the next person to read the code will either have to spend extra effort to understand the loop, or will fail to do so and will therefore fail to understand.
If you want to monkey with the loop index, use a while loop. Especially in C/C++/related languages, a for loop is exactly as powerful as a while loop, so you never lose any power or expressiveness. Any for loop can be converted to a while loop and vice versa. However, the next person who reads it won't depend on the implication that you don't monkey with the loop index. Making it a while loop instead of a for loop is a warning that this kind of loop may be more complicated, and in your case, it is in fact more complicated.
If you increment inside the loop, make sure to comment it. A canonical example (based on a Scott Meyers Effective C++ item) is given in the Q&A How to remove from a map while iterating it? (verbatim code copy)
for (auto it = m.cbegin(); it != m.cend() /* not hoisted */; /* no increment */)
{
    if (must_delete)
    {
        m.erase(it++); // or "it = m.erase(it)" since C++11
    }
    else
    {
        ++it;
    }
}
Here, both the non-constant nature of the end() iterator and the increment inside the loop are surprising, so they need to be documented. Note: hoisting the end() call out of the loop condition is after all possible here, so it should probably be done for code clarity.
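For illustration, a hedged sketch of the same loop with the end() call hoisted and the C++11 erase return value used (this is an adaptation, not a verbatim quote from that Q&A):

for (auto it = m.cbegin(), end = m.cend(); it != end; )
{
    if (must_delete)
    {
        it = m.erase(it);   // C++11: erase returns the iterator following the erased element
    }
    else
    {
        ++it;
    }
}

Hoisting end is safe here because std::map::erase only invalidates iterators to the erased element.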
For what it's worth, here is what the C++ Core Guidelines has to say on the subject:
http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#Res-loop-counter
ES.86: Avoid modifying loop control variables inside the body of raw for-loops
Reason: The loop control up front should enable correct reasoning about what is happening inside the loop. Modifying loop counters in both the iteration-expression and inside the body of the loop is a perennial source of surprises and bugs.
Also note that in the other answers here that discuss the std::map case, the increment of the control variable is still only done once per iteration, whereas in your example it can be done more than once per iteration.
So after some confusion (close, reopen, question body update, title update), I think the question is finally clear, and also no longer opinion-based.
As I understand it, the question is:
When I look at code written by others, should I be expecting to see the "loop condition variable" being changed in the loop body?
The answer to this is a clear:
yes
When you work with other people's code - regardless of whether you are doing a review, fixing a bug, or adding a new feature - you should expect the worst.
Everything that is valid within the language is to be expected.
Don't make any assumptions about the code being in accordance with any good practice.
It's really better to write it as a while loop:
int y = 4;
while (y < 12)
{
    /* body */
    if (condition)
        y++;
    y++;
}
You can sometimes separate out the loop logic from the body:
while (y < 12)
{
    /* body */
    y += condition ? 2 : 1;
}
I would allow the for() method if and only if you rarely "skip" an item, like escapes in a quoted string.

C-API: Allocating "PyTypeObject-extension"

I have found some code in PyCXX that may be buggy.
Is it indeed a bug, and if so, what is the right way to fix it?
Here is the problem:
struct PythonClassInstance
{
    PyObject_HEAD
    ExtObjBase* m_pycxx_object;
};
:
{
    :
    table->tp_new = extension_object_new; // table is the PyTypeObject
    :
}
:
static PyObject* extension_object_new(
    PyTypeObject* subtype, PyObject* args, PyObject* kwds )
{
    PythonClassInstance* o = reinterpret_cast<PythonClassInstance*>
        ( subtype->tp_alloc(subtype, 0) );
    if( ! o )
        return nullptr;
    o->m_pycxx_object = nullptr;
    PyObject* self = reinterpret_cast<PyObject*>( o );
    return self;
}
Now PyObject_HEAD expands to "PyObject ob_base;", so clearly PythonClassInstance trivially extends PyObject to contain an extra pointer (which will point to PyCXX's representation of this PyObject).
tp_alloc allocates memory for storing a PyObject.
The code then typecasts this pointer to a PythonClassInstance, laying claim to an extra 4 (or 8?) bytes that it does not own!
And then it sets this extra memory to 0.
This looks very dangerous, and I'm surprised the bug has gone unnoticed. The risk is that some future object will get placed in this location (that is meant to be storing the ExtObjBase*).
How to fix it?
PythonClassInstance foo{};
PyObject* tmp = subtype->tp_alloc(subtype,0);
// !!! memcpy sizeof(PyObject) bytes starting from location tmp into location (void*)&foo
But then I think maybe I need to release tmp, and I don't think I should be playing with memory directly like this; I feel like it could be jeopardising Python's built-in memory management/garbage collection machinery.
The other option is that maybe I can persuade tp_alloc to allocate extra bytes (4, or is it 8 now - enough for a pointer) by passing in 1 instead of 0.
Documentation says this second parameter is "Py_ssize_t nitems" and:
If the type's tp_itemsize is non-zero, the object's ob_size field should be initialized to nitems and the length of the allocated memory block should be tp_basicsize + nitems*tp_itemsize, rounded up to a multiple of sizeof(void*); otherwise, nitems is not used and the length of the block should be tp_basicsize.
So it looks like I should be setting:
table->tp_itemsize = sizeof(void*);
:
PyObject* tmp = subtype->tp_alloc(subtype,1);
EDIT: just tried this and it causes a crash
But then the documentation goes on to say:
Do not use this function to do any other instance initialization, not even to allocate additional memory; that should be done by tp_new.
Now I'm not sure whether this code belongs in tp_new or tp_init.
Related:
Passing arguments to tp_new and tp_init from subtypes in Python C API
Python C-API Object Allocation
The code is correct.
As long as the PyTypeObject for the extension object is properly initialized it should work.
The base class tp_alloc receives subtype so it should know how much memory to allocate by checking the tp_basicsize member.
This is a common Python C API pattern, as demonstrated in the tutorial.
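For illustration, a hedged sketch (not PyCXX's actual code) of the part of the type setup that matters here, using the table variable from the snippet in the question:

// The allocation size used by tp_alloc comes from tp_basicsize, not from sizeof(PyObject):
table->tp_basicsize = sizeof(PythonClassInstance);  // PyObject_HEAD plus m_pycxx_object
table->tp_itemsize  = 0;                            // not a variable-size object
table->tp_new       = extension_object_new;
// The default tp_alloc (PyType_GenericAlloc) allocates tp_basicsize bytes and
// zero-fills them, so the cast to PythonClassInstance* stays within the allocation.

As long as tp_basicsize is set from sizeof(PythonClassInstance), the extra pointer is part of the allocation and nothing is overclaimed.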
Actually this is a (minor/harmless) bug in PyCXX.
SO would like to convert this answer to a comment, which makes no sense since a comment can't be awarded the green tick of completion, so I am leaving it as an answer and rambling a bit in order to qualify it.

Is it possible to inject values in the frama-c value analyzer?

I'm experimenting with the Frama-C value analyzer to evaluate C code, which is actually threaded.
I want to ignore any threading problems that might occur and just inspect the possible values for a single thread. So far this works by setting the entry point to where the thread starts.
Now to my problem: inside one thread I read values that are written by another thread. Because Frama-C does not (and should not?) currently consider threading, it assumes my variable is in some broad range, but I know that the range is in fact much smaller.
Is it possible to tell the value analyzer the value range of this variable?
Example:
volatile int x = 0;
void f() {
    while (x == 0)
        sleep(100);
    ...
}
Here Frama-C detects that x is volatile and thus has the range [--..--], but I know what the other thread will write into x, and I want to tell the analyzer that x can only be 0 or 1.
Is this possible with Frama-C, especially in the GUI?
Thanks in advance
Christian
This is currently not possible automatically. The value analysis considers that volatile variables always contain the full range of values allowed by their underlying type. There exists, however, a proprietary plug-in that transforms accesses to volatile variables into calls to a user-supplied function. In your case, your code would essentially be transformed into this:
int x = 0;
void f() {
    while (1) {
        x = f_volatile_x();   /* each read of the volatile x becomes a call */
        if (x == 0)
            sleep(100);
        else
            break;
    }
    ...
}
By specifying f_volatile_x correctly, you can ensure it returns values between 0 and 1 only.
If the variable 'x' is not modified in the thread you are studying, you could also initialize it at the beginning of the 'main' function with :
x = Frama_C_interval (0, 1);
This is a function defined by Frama-C in ...../share/frama-c/builtin.c, so you have to add that file to your inputs when you use it.
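For illustration, a hedged sketch of what that could look like (the analysis_entry wrapper is hypothetical, and the volatile qualifier is assumed to be dropped for the analysis run; with x still declared volatile, the value analysis would keep widening it to the full range, as described in the previous answer):

/* Frama_C_interval is provided by the builtin.c file mentioned above;
   add that file to the analyzed sources. */
int Frama_C_interval(int min, int max);

int x = 0;          /* volatile dropped for the analysis run */
void f(void);       /* the thread function from the question */

/* hypothetical wrapper used as the analysis entry point */
void analysis_entry(void)
{
    x = Frama_C_interval(0, 1);   /* tell the analyzer x is only ever 0 or 1 */
    f();
}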

C/OpenMP - issue with threadprivate and vectors of pointers

I'm new to the world of parallel programming and OpenMP, so this may be a naive question, but I can't really come up with a good answer to what I'm experiencing, so I hope someone will be able to shed some light on the matter.
What I am trying to achieve is to have a private copy of a dynamically allocated matrix (of integers) for every thread that will handle the following parallel section, but as soon as the flow of execution enters said region the reference to the supposedly private matrix holds a null value.
Is there any limitation of this directive I'm not aware of? Everything seems to work just fine with one-dimensional dynamic arrays.
A snippet of the code is the following one...
#define n 10000

int **matrix;
#pragma omp threadprivate(matrix)

int main()
{
    matrix = (int**) calloc(n, sizeof(int*));
    for (int i = 0; i < n; i++)
        matrix[i] = (int*) calloc(n, sizeof(int));
    AdjacencyMatrix(n, matrix);
    ...

    /* Explicitly turn off dynamic threads */
    omp_set_dynamic(0);

    #pragma omp parallel
    {
        // From now on, matrix is NULL...
        executor_p(matrix, n);
    }
    ....
Look at the OpenMP documentation regarding what happens with the threadprivate directive:
On first entry to a parallel region, data in THREADPRIVATE variables and common blocks should be assumed undefined, unless a COPYIN clause is specified in the PARALLEL directive
There's no guarantee of what value is going to be stored in the matrix variable in the parallel region.
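If it is acceptable for all threads to read the same underlying data, a hedged sketch of one option is the COPYIN clause mentioned in the quote above; it initializes each thread's threadprivate copy of matrix with the master thread's value on entry to the parallel region (sharing the pointer, and therefore the rows, rather than duplicating them):

#pragma omp parallel copyin(matrix)
{
    // matrix is no longer NULL here; every thread points at the same rows
    executor_p(matrix, n);
}

If each thread really needs its own copy of the data, the copying has to be done by hand; see the sketch after the next answer.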
OpenMP can privatise only variables with a known storage size. That is, you can have a private copy of an array if it was defined like double matrix[N][M]. In your case, not only is the storage size unknown (a pointer doesn't store the number of elements it points to), but your matrix is also not a contiguous area in memory; it is rather a pointer to a list of dynamically allocated rows.
What you would end up with is having a private copy of the top-level pointer, not a private copy of the matrix data itself.
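For completeness, a hedged sketch of one way to give each thread a genuinely private copy of the data: allocate and fill the rows inside the parallel region, so every thread owns its own storage. Here source_matrix is a hypothetical shared, non-threadprivate variable holding the data built in main, and the fragment needs <stdlib.h> and <string.h>:

#pragma omp parallel
{
    // each thread builds and uses its own copy of the shared source data
    int **my_matrix = (int**) calloc(n, sizeof(int*));
    for (int i = 0; i < n; i++) {
        my_matrix[i] = (int*) calloc(n, sizeof(int));
        memcpy(my_matrix[i], source_matrix[i], n * sizeof(int));
    }
    executor_p(my_matrix, n);
    for (int i = 0; i < n; i++)   // release the per-thread copy
        free(my_matrix[i]);
    free(my_matrix);
}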
