Synchronized Block takes more time after instrumenting with ASM - bytecode

I am trying to instrument a Java synchronized block using ASM. The problem is that after instrumenting, the synchronized block takes much more time to execute: it goes from 2 ms to 200 ms on a Linux box.
I am implementing this by identifying the MonitorEnter and MonitorExit opcodes.
I instrument at three points: 1. just before MonitorEnter, 2. just after MonitorEnter, 3. just before MonitorExit.
Points 1 and 3 together work fine, but when I add point 2, the execution time increases dramatically.
Even if I instrument only a single System.out.println statement there, which is executed just once, the measured time is much higher.
Here is the sample code (prime-number loop, run 10 times):
for (int w = 0; w < 10; w++) {
    synchronized (s) {
        long t1 = System.currentTimeMillis();
        long num = 2000;
        for (long i = 1; i < num; i++) {
            long p = i;
            int j;
            for (j = 2; j < p; j++) {
                long n = p % i;
            }
        }
        long t2 = System.currentTimeMillis();
        System.out.println("Time>>>>>>>>>>>> " + (t2 - t1));
    }
}
Here is the instrumentation code (the System.currentTimeMillis() call below records the time at which instrumentation happened; it is not a measure of execution time, which comes from the System.out.println statement above):
public void visitInsn(int opcode)
{
    switch (opcode)
    {
        // Scenario 1: just before MONITORENTER (opcode 194)
        case Opcodes.MONITORENTER:
            visitFieldInsn(Opcodes.GETSTATIC, "java/lang/System", "out", "Ljava/io/PrintStream;");
            visitLdcInsn("TIME Arrive: " + System.currentTimeMillis());
            visitMethodInsn(Opcodes.INVOKEVIRTUAL, "java/io/PrintStream", "println", "(Ljava/lang/String;)V");
            break;
        // Scenario 3: just before MONITOREXIT (opcode 195)
        case Opcodes.MONITOREXIT:
            visitFieldInsn(Opcodes.GETSTATIC, "java/lang/System", "out", "Ljava/io/PrintStream;");
            visitLdcInsn("TIME exit : " + System.currentTimeMillis());
            visitMethodInsn(Opcodes.INVOKEVIRTUAL, "java/io/PrintStream", "println", "(Ljava/lang/String;)V");
            break;
    }
    super.visitInsn(opcode);
    // Scenario 2: just after MONITORENTER
    if (opcode == Opcodes.MONITORENTER)
    {
        visitFieldInsn(Opcodes.GETSTATIC, "java/lang/System", "out", "Ljava/io/PrintStream;");
        visitLdcInsn("TIME enter: " + System.currentTimeMillis());
        visitMethodInsn(Opcodes.INVOKEVIRTUAL, "java/io/PrintStream", "println", "(Ljava/lang/String;)V");
    }
}
I am not able to find the reason why this is happening or how to correct it.
Thanks in advance.

The reason lies in the internals of the JVM that you are using to run the code. I assume it is a HotSpot JVM, but the explanation below applies equally to most other implementations.
If you trigger the following code:
int result = 0;
for(int i = 0; i < 1000; i++) {
result += i;
}
The Java compiler translates this directly into bytecode, but at run time the JVM can easily see that the code does not do anything: executing it has no effect observable by the rest of the application, so why should the JVM execute it at all? This is exactly the kind of dead-code elimination the JIT optimizer performs for you.
If you however trigger the following code:
int result = 0;
for(int i = 0; i < 1000; i++) {
System.out.println(result);
}
the Java runtime can no longer optimize your code away. The whole loop must run, because System.out.println(int) has a real, observable side effect, and so the code runs slower.
Now let's look at your example. In your first example, you basically write this code:
synchronized(s) {
// do nothing useful
}
This entire code block can easily be removed by the Java run time. This means: There will be no synchronization! In the second example, you are writing this instead:
synchronized(s) {
long t1 = System.currentTimeMillis();
// do nothing useful
long t2 = System.currentTimeMillis();
System.out.println("Time>>>>>>>>>>>> " + (t2-t1));
}
This means that the effective code might look like this:
synchronized(s) {
long t1 = System.currentTimeMillis();
long t2 = System.currentTimeMillis();
System.out.println("Time>>>>>>>>>>>> " + (t2-t1));
}
What is important here is that this optimized code is still effectively synchronized, which makes an important difference in execution time. Basically, you are measuring the cost of synchronizing on s (and even that might be optimized away after a couple of runs if the JVM realizes that s is never locked elsewhere in your code; the buzzword is speculative optimization, with the possibility of deoptimization if code loaded in the future also synchronizes on s).
You should really read this:
http://www.ibm.com/developerworks/java/library/j-jtp02225/
http://www.ibm.com/developerworks/library/j-jtp12214/
Your test, for example, is missing a warm-up phase, so you are also measuring how much time the JVM spends compiling the bytecode to machine code.
On a side note: synchronizing on a String is almost always a bad idea. Your strings may or may not be interned, which means you cannot be absolutely sure about their identity. As a result, synchronization might or might not work as intended, and you might even cause unintended contention with other parts of your code that lock the same interned string.

Related

Why does clock() return -1 in C

I'm trying to implement an error handler using the clock() function from the "time.h" library. The code runs on an embedded system (Colibri iMX7 - M4 processor). The function is used to monitor a current value within a specific range; if the current value isn't correct, the function should return an error.
The function checks whether the error is occurring; on the first run it saves the time of the error's first appearance in a clock_t as a reference, and on subsequent runs, if the error is still present, it compares the current time from clock() with that reference to see whether the error has lasted longer than a specific time.
The problem is that the clock() function is always returning -1. What should I do to avoid that? Also, why can't I initialize a static clock_t variable with it (e.g. static clock_t start_t = clock();)?
Please see below the function:
bool CrossLink_check_error_LED_UV_current_clock(int current_state, int current_at_LED_UV)
{
    bool has_LED_UV_current_deviated = false;
    static int current_number_of_errors_Current_LED_CANNON = 0;
    clock_t startTimeError = clock();
    const int maximum_operational_current_when_on = 2000;
    const int minimum_turned_on_LED_UV_current = 45;
    if( (current_at_LED_UV > maximum_operational_current_when_on)
        ||(current_state!=STATE_EMITTING && (current_at_LED_UV > minimum_turned_on_LED_UV_current))
        ||(current_state==STATE_EMITTING && (current_at_LED_UV < minimum_turned_on_LED_UV_current)) ){
        current_number_of_errors_Current_LED_CANNON++;
        if(current_number_of_errors_Current_LED_CANNON > 1) {
            if (clock() - startTimeError > 50000){ // 50ms
                has_LED_UV_current_deviated = true;
                PRINTF("current_at_LED_UV: %d", current_at_LED_UV);
                if(current_state==STATE_EMITTING){
                    PRINTF(" at state emitting");
                }
                PRINTF("\n\r");
            }
        }else{
            if(startTimeError == -1){
                startTimeError = clock();
            }
        }
    }else{
        startTimeError = 0;
        current_number_of_errors_Current_LED_CANNON = 0;
    }
    return has_LED_UV_current_deviated;
}
Edit: I forgot to mention before, but we are using the GCC 9.3.1 arm-none-eabi compiler with CMake to build the executable. Our embedded system (Colibri iMX7 made by Toradex) consists of two A7 processors that run our Linux (the more visual interface), while the program that controls our device runs on an M4 processor without an OS, just pure bare metal.
For many of the functions provided in the C standard library, if you have the documentation installed (it usually gets installed with the compiler), you can view it using the man command in the shell. man clock tells me:
NAME
    clock - determine processor time
SYNOPSIS
    #include <time.h>
    clock_t clock(void);
DESCRIPTION
    The clock() function returns an approximation of processor time used by the program.
RETURN VALUE
    The value returned is the CPU time used so far as a clock_t; to get the number of seconds used,
    divide by CLOCKS_PER_SEC. If the processor time used is not available or its value cannot be
    represented, the function returns the value (clock_t) -1.
etc.
This tells us that -1 means that the processor time (CLOCK_PROCESS_CPUTIME_ID) is unavailable. The solution is to use CLOCK_MONOTONIC instead. We can select the clock we want to use with clock_gettime.
struct timespec clock_time;
if (clock_gettime(CLOCK_MONOTONIC, &clock_time)) {
    printf("CLOCK_MONOTONIC is unavailable!\n");
    exit(1);
}
printf("Seconds: %ld Nanoseconds: %ld\n", (long) clock_time.tv_sec, clock_time.tv_nsec);
To answer the second part of your question:
static clock_t start_time = clock();
is not allowed because the return value of the function clock() is not known until runtime, but in C the initializer of a static variable must be a compile-time constant.
You can write:
static clock_t start_time = 0;
if (start_time == 0)
{
start_time = clock();
}
But this may or may not be suitable to use in this case, depending on whether zero is a legitimate return value of the function. If it could be, you would need something like:
static bool start_time_initialized = false;
static clock_t start_time;
if (!start_time_initialized)
{
start_time_initialized = true;
start_time = clock();
}
The above is reliable only if you cannot have two copies of this function running at once (it is not re-entrant).
If you have a POSIX library available you could use a pthread_once_t to do the same as the above bool but in a re-entrant way. See man pthread_once for details.
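A minimal sketch of that pthread_once approach, assuming POSIX threads are available on the target (they may well not be on bare metal):
#include <pthread.h>
#include <time.h>

static pthread_once_t start_time_once = PTHREAD_ONCE_INIT;
static clock_t start_time;

/* Runs exactly once, even if several threads hit the check at the same time. */
static void init_start_time(void)
{
    start_time = clock();
}

clock_t get_start_time(void)
{
    pthread_once(&start_time_once, init_start_time);
    return start_time;
}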
Note that C++ allows more complicated options in this area, but you have asked about C.
Note also that abbreviating "start time" as start_t is a very bad idea, because the suffix _t means "type" and should only be used for type names.
In the end the problem was that, since we are running our code on bare metal, the clock() function wasn't working. We ended up using an internal timer on the M4 processor that we found, so now everything is fine. Thanks for the answers.

Fibonacci using Fork Join in Java 7

This is a program that computes Fibonacci numbers using the Java 7 fork/join framework, but it seems like there is a deadlock.
package ForkJoin;

import java.time.LocalTime;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;
import static java.time.temporal.ChronoUnit.MILLIS;

class Fibonacci extends RecursiveTask<Integer> {
    int num;

    Fibonacci(int n) {
        num = n;
    }

    @Override
    protected Integer compute() {
        if (num <= 1)
            return num;
        Fibonacci fib1 = new Fibonacci(num - 1);
        fib1.fork();
        Fibonacci fib2 = new Fibonacci(num - 2);
        fib2.fork();
        return fib1.join() + fib2.join();
    }
}

public class ForkJoinFibonaaciEx {
    public static void main(String[] arg) {
        LocalTime before = LocalTime.now();
        int processors = Runtime.getRuntime().availableProcessors();
        System.out.println("Available core: " + processors);
        ForkJoinPool pool = new ForkJoinPool(processors);
        System.out.println("Inside ForkJoin for number 50: " + pool.invoke(new Fibonacci(50)));
        LocalTime after = LocalTime.now();
        System.out.println("Total time taken: " + MILLIS.between(before, after));
    }
}
JVisualVM shows there is a deadlock.
I'm not sure what the real issue is.
Also, I have noticed code where developers call fork() on one half of the problem and compute() on the other half.
E.g. in this example they call fib1.fork() but do not fork fib2.
You can see the full example here:
https://github.com/headius/forkjoin.rb/blob/master/examples/recursive/Fibonacci.java
Your help is very much appreciated.
By also calling fib2.fork() in the compute method, you are creating a subtask for work that the current thread could just do itself, which only adds scheduling overhead. Instead, you can call fib2.compute() directly; the recursion will still fork further subtasks as it goes.
That alone is not the main culprit for the time consumed, though. The real cost comes from the join() operation, since it has to wait for child subtasks (which may be executed by other threads) to finish.
Therefore, although multiple threads are running on the available cores and providing parallelism, the actual computation at each leaf is negligible compared to the cost of forking and joining.
The bottom line is that fork/join is a good fit when the following are true:
The problem can be solved with divide and conquer, creating sub-problems and solving them recursively.
The problem can't be divided up front and is dynamic.
Also, for fork/join to work effectively, we should only divide the problem down to a level where the parallel computation does more good than harm.
Try this:
class ComputeFibonacciTask extends RecursiveTask<Long> {
    private int n;

    public ComputeFibonacciTask(int n) {
        this.n = n;
    }

    protected Long compute() {
        if (n <= 1) {
            return Long.valueOf(n);
        }
        else {
            RecursiveTask<Long> otherTask = new ComputeFibonacciTask(n - 1);
            otherTask.fork();
            return new ComputeFibonacciTask(n - 2).compute() + otherTask.join();
        }
    }
}

OpenCL possible reason a clGetEventInfo would cause a segfault?

I have a pretty complicated OpenCL app. It fires up 5 different contexts on 5 different GPUs and executes the same kernel on all of them, splitting the work into 1024 "chunks" to be processed.
Each time a kernel finishes, the result is checked and the GPU is given a new chunk. Sometimes, as the app is starting (very rarely mid-run), it will immediately segfault on the clGetEventInfo call.
This is done in a loop using callbacks and clGetEventInfo calls to ensure something is finished before moving on to the next step.
GDB output:
(gdb) back
#0 0x00007fdc686ab525 in clGetEventInfo () from /usr/lib/libOpenCL.so.1
#1 0x00000000004018c1 in ready (event=0x26a00000267) at gputest.c:165
#2 0x0000000000404b5a in main (argc=9, argv=0x7fffdfe3b268) at gputest.c:544
The ready function:
int ready(cl_event event) {
    int rdy;
    if(!event)
        return 0;
    clGetEventInfo(event, CL_EVENT_COMMAND_EXECUTION_STATUS, sizeof(cl_int), &rdy, NULL);
    if(rdy == CL_COMPLETE)
        return 1;
    return 0;
}
Here is how the kernel is run and how the event is set and checked; some pseudocode is inserted for brevity:
while(test if loop is complete) {
    for(j = 0; j < GPUS; j++) {
        if(gpu[j].waiting && loops < 9999) {
            gpu[j].waiting = 0;
            offset[j] = loops * 1024 * 1024;
            loops++;
            EC("kernel init", clEnqueueNDRangeKernel(queues[j], kernel_init[j], 1, &(offset[j]), &global_work_size, &work128, 0, NULL, &events[j]));
            gpu[j].readsearch = events[j];
            gpu[j].reading = 1;
        }
    }
    for(j = 0; j < GPUS; j++) {
        if(gpu[j].reading && ready(gpu[j].readsearch)) {
            gpu[j].reading = 0;
            gpu[j].waiting = 1;
            // unrelated reporting other code here
        }
    }
}
It's pretty simple. There is more to the code, but it's unrelated. The ready/checking function is very simple. I even added debugging to the ready function to printf the event # to see what was happening when it crashed, but there was nothing useful and no pattern I could see.
What could be causing this?
Ugh. Found the problem. Since you cannot initialize members when you declare a struct in C, I was using some values uninitialized. I malloc'ed the gpu structs and then just started using them, so the if(gpu[x].reading && ...) check was reading random, completely uninitialized data. Sometimes it was non-zero, which allowed the ready() function to fire. Since the gpu[x].readsearch event was never set in the first place, clGetEventInfo bombed trying to use whatever was at that memory location.
This would be time number 482,847 that accidentally using uninitialized variables has burned me.
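A minimal sketch of the fix, zero-initializing the allocation so the flags and event handles start out cleared; the struct layout and field names below are guesses based on the code in the question:
#include <stdlib.h>
#include <string.h>
#include <CL/cl.h>

#define GPUS 5

/* Hypothetical layout, based on the fields referenced in the question. */
struct gpu_state {
    int      waiting;
    int      reading;
    cl_event readsearch;
};

struct gpu_state *alloc_gpu_state(void)
{
    /* calloc zero-fills the block, so .reading is 0 and .readsearch is NULL
       until a kernel is actually enqueued. */
    struct gpu_state *gpu = calloc(GPUS, sizeof *gpu);

    /* Equivalent fix for an already malloc'ed block:
       memset(gpu, 0, GPUS * sizeof *gpu); */
    return gpu;
}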

How to make a fast context switch from one process to another?

I need to run unsafe native code in a sandbox process, and I need to reduce the bottleneck of switching between the two processes. Both processes (controller and sandbox) share two auto-reset events and a coherent view of a mapped file (shared memory) that is used for communication.
To keep this post smaller, I removed the initialization from the sample code, but the events are created by the controller, duplicated using DuplicateHandle, and then sent to the sandbox process before the work starts.
Controller source:
void inSandbox(HANDLE hNewRequest, HANDLE hAnswer, volatile int *shared) {
    int before = *shared;
    for (int i = 0; i < 100000; ++i) {
        // Notify sandbox of a new request and wait for answer.
        SignalObjectAndWait(hNewRequest, hAnswer, INFINITE, FALSE);
    }
    assert(*shared == before + 100000);
}

void inProcess(volatile int *shared) {
    int before = *shared;
    for (int i = 0; i < 100000; ++i) {
        newRequest(shared);
    }
    assert(*shared == before + 100000);
}

void newRequest(volatile int *shared) {
    // In this test, the request only increments an int.
    (*shared)++;
}
Sandbox source:
void sandboxLoop(HANDLE hNewRequest, HANDLE hAnswer, volatile int *shared) {
    // Wait for the first request from controller.
    assert(WaitForSingleObject(hNewRequest, INFINITE) == WAIT_OBJECT_0);
    for(;;) {
        // Perform request.
        newRequest(shared);
        // Notify controller and wait for next request.
        SignalObjectAndWait(hAnswer, hNewRequest, INFINITE, FALSE);
    }
}

void newRequest(volatile int *shared) {
    // In this test, the request only increments an int.
    (*shared)++;
}
Measurements:
inSandbox() - 550ms, ~350k context switches, 42% CPU (25% kernel, 17% user).
inProcess() - 20ms, ~2k context switches, 55% CPU (2% kernel, 53% user).
The machine is Windows 7 Pro, Core 2 Duo P9700, with 8 GB of memory.
An interesting fact is that the sandbox solution uses 42% of the CPU versus 55% for the in-process solution. Another noteworthy fact is that the sandbox run causes about 350k context switches, which is much more than the 200k context switches we would expect from the source code.
I need to know if there's a way to reduce the overhead of transferring control to another process. I already tried using pipes instead of events, and it was much worse. I also tried using no events at all, by making the sandbox call SuspendThread(GetCurrentThread()) and the controller call ResumeThread(hSandboxThread) on every request, but the performance was similar to using events.
If you have a solution that uses assembly (like performing a manual context switch) or Windows Driver Kit, please let me know as well. I don't mind having to install a driver to make this faster.
I heard that Google Native Client does something similar, but I only found this documentation. If you have more information, please let me know.
The first thing to try is raising the priority of the waiting thread. This should reduce the number of extraneous context switches.
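For example, a minimal sketch using the Win32 API; whether THREAD_PRIORITY_HIGHEST is appropriate depends on the rest of the system, and the function name is illustrative:
#include <windows.h>

/* Sketch: bump the priority of the thread that blocks in SignalObjectAndWait,
   so the scheduler wakes it promptly when its event is signaled. */
void raiseWaiterPriority(void)
{
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
}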
Alternatively, since you're on a 2-core system, using spinlocks instead of events would make your code much much faster, at the cost of system performance and power consumption:
void inSandbox(volatile int *lock, volatile int *shared)
{
    int i, before = *shared;
    for (i = 0; i < 100000; ++i) {
        *lock = 1;
        while (*lock != 0) { }
    }
    assert(*shared == before + 100000);
}

void newRequest(volatile int *shared) {
    // In this test, the request only increments an int.
    (*shared)++;
}

void sandboxLoop(volatile int *lock, volatile int *shared)
{
    for(;;) {
        while (*lock != 1) { }
        newRequest(shared);
        *lock = 0;
    }
}
In this scenario, you should probably set thread affinity masks and/or lower the priority of the spinning thread so that it doesn't compete with the busy thread for CPU time.
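A sketch of pinning the two spinning threads to different cores so they never fight over the same CPU; the helper name and core indices are just examples:
#include <windows.h>

/* Sketch: give each spinning thread its own core via an affinity mask. */
void pinToCore(HANDLE hThread, int core)
{
    SetThreadAffinityMask(hThread, (DWORD_PTR)1 << core);
}

/* e.g. pinToCore(hControllerThread, 0); pinToCore(hSandboxThread, 1); */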
Ideally, you'd use a hybrid approach. When one side is going to be busy for a while, let the other side wait on an event so that other processes can get some CPU time. You could trigger the event a little ahead of time (using the spinlock to retain synchronization) so that the other thread will be ready when you are.

How to put my structure variable into CPU caches to eliminate main memory page access time?

It's clear that there is no explicit way or system call that lets a programmer put a variable into the CPU cache.
But I think that a certain programming style or a well-designed algorithm can increase the likelihood that the variable ends up cached.
Here is my example:
I want to append an 8-byte structure to the end of an array of structures of the same type, declared in a global main-memory region.
This process is repeated for 4 million operations. It takes 6 seconds, i.e. about 1.5 µs per operation. I think this result shows that the two memory areas are not being cached.
I got some clues from cache-oblivious algorithms, so I tried several ways to improve this. So far, no improvement.
I think some clever code could reduce the elapsed time by a factor of 10 to 100. Please show me the way.
-------------------------------------------------------------------------
Appended (2011-04-01)
Damon, thank you for your comment!
After reading your comment, I analyzed my code again and found several things that I had missed. The code attached below is an abbreviated version of my original code.
To accurately measure each operation's execution time (in the original code there are several different types of operations), I inserted time-measuring code using the clock_gettime() function. I thought that if I measured each operation's execution time and accumulated them, the additional cost of the main loop could be excluded.
In the original code, the time-measuring code was hidden by a macro function, so I had totally forgotten about it.
The running time of this code is almost 6 seconds, but if I get rid of the time-measuring calls in the main loop, it drops to 0.1 seconds.
Since the clock_gettime() function supports very high precision (up to 1 nanosecond), is executed on the basis of an independent thread, and also requires a fairly big structure, I think the calls caused the main-memory area where the consecutive insertions are performed to be evicted from the cache.
Thank you again for your comment. Any suggestion for further improvement would be very helpful for optimizing my code.
I think the hierarchically defined structure might cause unnecessary time cost, but first I want to know how much it actually costs before I change it to more C-style code.
#include <stdio.h>
#include <time.h>

// uint32, uint, NIL and OP_INS are defined elsewhere in the original (abbreviated) code.

typedef struct t_ptr {
    uint32 isleaf :1, isNextLeaf :1, ptr :30;
    t_ptr(void) {
        isleaf = false;
        isNextLeaf = false;
        ptr = NIL;
    }
} PTR;

typedef struct t_key {
    uint32 op :1, key :31;
    t_key(void) {
        op = OP_INS;
        key = 0;
    }
} KEY;

typedef struct t_key_pair {
    KEY key;
    PTR ptr;
    t_key_pair() {
    }
    t_key_pair(KEY k, PTR p) {
        key = k;
        ptr = p;
    }
} KeyPair;

typedef struct t_op {
    KeyPair keyPair;
    uint seq;
    t_op() {
        seq = 0;
    }
} OP;

#define MAX_OP_LEN 4000000

typedef struct t_opq {
    OP ops[MAX_OP_LEN];
    int freeOffset;
    int globalSeq;
    bool queueOp(register KeyPair keyPair);
} OpQueue;

bool OpQueue::queueOp(register KeyPair keyPair) {
    bool isFull = false;
    if (freeOffset == (int) (MAX_OP_LEN - 1)) {
        isFull = true;
    }
    ops[freeOffset].keyPair = keyPair;
    ops[freeOffset].seq = globalSeq++;
    freeOffset++;
    return isFull;
}

OpQueue opQueue;

int main() {
    struct timespec startTime, endTime, totalTime = {0, 0};
    for (int i = 0; i < 4000000; i++) {
        clock_gettime(CLOCK_REALTIME, &startTime);
        opQueue.queueOp(KeyPair());
        clock_gettime(CLOCK_REALTIME, &endTime);
        totalTime.tv_sec += (endTime.tv_sec - startTime.tv_sec);
        totalTime.tv_nsec += (endTime.tv_nsec - startTime.tv_nsec);
    }
    printf("\n elapsed time: %lld", totalTime.tv_sec * 1000000LL + totalTime.tv_nsec / 1000L);
    return 0;
}
YOU don't put the structure into any cache. The CPU does that automatically for you. The CPU is even more clever than that; if you access sequential memory, it will start putting things from memory into the cache before you read them.
And really, it should be common sense that for a simple bit of code like this, the time you spend on measuring is ten times more than the time to perform the code (apparently 60 times in your case).
Since you put so much confidence in clock_gettime(): I suggest you call it five times in a row and store the results, then print the differences. There is resolution, there is precision, and there is how long it takes to return the current time, which is pretty damned long.
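A minimal sketch of that experiment (the clock ID and output format are arbitrary choices):
#include <stdio.h>
#include <time.h>

/* Call clock_gettime() five times back to back and print the gaps, to see how
   long the call itself takes compared to its nominal resolution. */
int main(void)
{
    struct timespec t[5];
    for (int i = 0; i < 5; i++)
        clock_gettime(CLOCK_REALTIME, &t[i]);

    for (int i = 1; i < 5; i++) {
        long delta_ns = (t[i].tv_sec - t[i - 1].tv_sec) * 1000000000L
                      + (t[i].tv_nsec - t[i - 1].tv_nsec);
        printf("call %d -> %d: %ld ns\n", i - 1, i, delta_ns);
    }
    return 0;
}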
I have been unable to force caching, but you can force memory to be uncacheable. If you have other large data structures, you might exclude them so that they do not pollute your caches. This can be done by specifying PAGE_NOCACHE for the Windows VirtualAlloc family of functions.
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366786(v=vs.85).aspx
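A sketch of such an allocation on Windows; the 1 MiB size and the function name are just examples:
#include <windows.h>

/* Allocate a buffer whose pages bypass the CPU caches, so touching it will
   not evict the data structures you actually want to keep cached. */
void *allocUncached(void)
{
    return VirtualAlloc(NULL, 1 << 20,
                        MEM_COMMIT | MEM_RESERVE,
                        PAGE_READWRITE | PAGE_NOCACHE);
}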
