Test app performance by making it lag

Is there a way or an application to test performance by making the app execute slower? I want to be sure that my app will perform well on older hardware.

Just adding stalls in SW won't necessarily imitate any older HW; it would just show you how the stalled code behaves on the new HW (and if the stalls aren't properly serializing, they may actually get avoided altogether).
If you just want to see how the code behaves without some specific ISA features, you can disable them at compile time, or even compile for an older architecture (see the sketch below for example flags). That won't make your CPU run any slower, of course, but it won't be able to use, for example, AVX/SSE vectors (on x86), or other dedicated instructions.
If you want an old system+OS configuration, you can use emulation - DOSBox, for example.
If you want an even higher level of realism, you can find a HW simulator that models that HW, and run on that (assuming you can cross-compile your code to run on it).
And of course, if you want an even more realistic experiment and are willing to go the extra mile, just get a specimen of that old HW, wipe the dust off, and build and run on it :)
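To make the compile-time option above concrete, here is a minimal sketch for GCC/Clang on x86. The specific flags (-march=core2, -mno-avx) are only example ways of capping the target ISA, and __builtin_cpu_supports is a GCC/Clang builtin used here just to show the difference between what the binary was allowed to assume and what the host actually offers:

    /* older_target_check.c - a minimal sketch (GCC/Clang on x86) showing that
       restricting the target ISA only limits which instructions the compiler
       may emit; the CPU itself still runs at full speed.

       Build for a pre-AVX baseline, for example:
           gcc -O2 -march=core2 older_target_check.c -o older_target_check
       or disable selected extensions only:
           gcc -O2 -mno-avx -mno-avx2 older_target_check.c -o older_target_check
    */
    #include <stdio.h>

    int main(void)
    {
        /* What the compiler was allowed to assume at build time. */
    #ifdef __AVX__
        puts("binary was compiled with AVX enabled");
    #else
        puts("binary was compiled without AVX");
    #endif

        /* What the host CPU actually offers at run time (GCC/Clang builtin). */
        __builtin_cpu_init();
        printf("host CPU %s AVX\n",
               __builtin_cpu_supports("avx") ? "supports" : "does not support");
        return 0;
    }

Running both variants of the build on the same machine makes the limitation visible: the "without AVX" binary still reports that the host supports AVX, it just never emits those instructions.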

Related

How can I benchmark or profile an embedded ARM platform under emulation?

I'm developing performance sensitive code for an embedded platform. In general, there are multiple ways to test for an embedded platform, and I'm doing so by developing on a full Linux machine, using the qemu-user arm mode as an emulator. I have full unit tests working, and now want to address performance.
I'd like to profile or benchmark my code. Now, doing so directly in qemu-user is silly, because a fast op may be emulated slowly. But, in principle, qemu could tell me how many clock cycles were emulated to run a function. Even if this doesn't have a full model, or even a partial model, of cache, mem latency, etc., it will still be very useful.
Is there a way I can use qemu to tell me some sense of how code A will perform vs code B? If not, is there another tool? (I recall Intel having some type of model which will tell you how fast given asm will execute.) In general, in the absence of an embedded platform with profiling tools, how can I benchmark and profile my code for ultimate performance?
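For the "code A vs code B" part, one rough starting point is a plain timing harness run under qemu-user (e.g. qemu-arm on a statically linked cross-compiled binary). The sketch below uses hypothetical work_a()/work_b() routines standing in for the code under test; the caveat is that qemu-user has no cache or latency model, so the numbers are only a coarse relative ordering, and newer QEMU builds can additionally count executed guest instructions via TCG plugins if that feature is enabled in your build:

    /* bench_ab.c - a minimal sketch for comparing two implementations under
       qemu-user, e.g.:
           arm-linux-gnueabihf-gcc -static -O2 bench_ab.c -o bench_ab
           qemu-arm ./bench_ab
       Caveat: this measures host time spent emulating, with no cache or memory
       latency model, so treat the result as a rough relative ordering only.
       work_a()/work_b() are hypothetical placeholders for the code under test. */
    #include <stdio.h>
    #include <time.h>

    static long work_a(long n) { long s = 0; for (long i = 0; i < n; i++) s += i;     return s; }
    static long work_b(long n) { long s = 0; for (long i = 0; i < n; i++) s += i * 2; return s; }

    static double seconds_for(long (*fn)(long), long n, int reps, long *sink)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int r = 0; r < reps; r++)
            *sink += fn(n);            /* accumulate so the work is not optimized away */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        long sink = 0;
        printf("A: %.3f s\n", seconds_for(work_a, 1000000, 100, &sink));
        printf("B: %.3f s\n", seconds_for(work_b, 1000000, 100, &sink));
        printf("(sink=%ld)\n", sink);
        return 0;
    }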

Does delve support hot swapping of changes or some variant?

Delve is an amazing debugger. Does delve support hot swapping of changes, or something similar to what the Java JVM offers? It takes me a lot of time to copy my code into Docker's build VM, then build all the files, then build & deploy dlv, then copy all the binaries to the runtime Docker container. I am looking to speed up my flow, so I was wondering if hot swap will ever be supported?
Does delve support hot swapping of changes
No, because Go does not support this: Go is statically compiled, meaning that the output is a single, self-contained executable file. It's not possible to hot-swap parts of a statically compiled binary.
Fortunately, Go is highly optimized for fast compilation. When properly configured, even the most complex Go programs can compile in seconds or less after a small change, because unaltered packages are cached and require no re-compilation.
This should provide most or all of the benefit (to debugging) that hot-swapping would, without the added complexity.

Debugging ARM assembly (Neon extension)

I am developing an algorithm that uses ARM Neon instructions. I am writing the code in an assembly file (.S, no inline asm).
My question is: what is the best way to debug it, i.e. view registers, memory, etc.?
Currently, I am using the Android NDK to compile, and my Android phone to run the algorithm.
Poor man's debug solutions...
You can use gdb / gdbserver to remotely control execution of applications on an Android phone. I'm not giving full details here because they change all the time, but for example you can start with this answer or do a quick search on the Internet. Learning to use GDB might seem to have a steep curve, but the material on the web is extensive; you can easily find something to your taste.
Single-stepping an ARM core via software tools is hard; that's why the ARM ecosystem is full of expensive tools and extra HW equipment.
The trick I use is to insert BRK instructions manually into the assembly code. BRK is the self-hosted debug breakpoint. When the core sees this instruction it stops and informs the OS, which then notifies the debugger and passes control to it. When the debugger gets control you can check the contents of registers and probably even change them. The last part of the operation is to make your process continue: since the PC is still at the breakpoint instruction, you must advance the PC to the instruction after the BRK.
Since you mentioned you use .S files instead of .s files, you can use GCC to do the preprocessing / macro work, so enabling and disabling BRK becomes less of an issue.
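A minimal sketch of such a preprocessor-guarded breakpoint in a .S file; the macro name and the -DDEBUG_BREAKPOINTS switch are just example choices, BRK is shown for AArch64 and BKPT would be the 32-bit ARM equivalent:

    /* .S files are run through the C preprocessor, so a build-time switch
       (-DDEBUG_BREAKPOINTS) can enable or disable the breakpoints without
       editing the code. */
    #ifdef DEBUG_BREAKPOINTS
        .macro DBG_BRK
        brk #0                  /* AArch64; use "bkpt #0" on 32-bit ARM */
        .endm
    #else
        .macro DBG_BRK
        .endm                   /* compiled out in normal builds */
    #endif

        .text
        .global my_neon_kernel
    my_neon_kernel:
        DBG_BRK                 /* execution stops here when breakpoints are enabled */
        /* ... NEON code under investigation ... */
        /* In gdb, after inspecting registers, resume past the breakpoint with:
               set $pc = $pc + 4
               continue                                                        */
        ret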
The big downside of this way of working is turnaround time: if there is a certain point you want to investigate with gdb, you must make sure there is a BRK instruction there, and that will probably require another build/push/debug cycle.

What are the possible side effects of using GCC profiling flag -pg?

There is a device driver for a camera device provided to us as a .so library file by the vendor.
Only the header file with the APIs is available, which provides the list of functions we can use to work with the device. Our application is linked against the .so library file provided by the vendor and uses the interface functions it provides.
When we wanted to measure the time taken by our application in handling different tasks, we added the GCC -pg flag and rebuilt our application.
But we found that with this executable built with -pg, we are observing random failures in the camera image-acquire functions. Since we only have the .so library file, we do not know what is going wrong inside those functions.
So, in general, I wanted to understand the possible reasons for such a failure mode. Any pointers or documents that explain what goes on inside profiling and its side effects are appreciated.
This answer is a helpful overview of how the gcc -pg flag profiler actually works. The take-home point is mostly to do with possible changes to timing. If your library has any kind of time-sensitivity in it, introducing profiler overheads might be changing the time it takes to execute parts of the code, and perhaps violating some kind of constraint.
If you look at the gprof documentation, it explains the implementation details:
Profiling works by changing how every function in your program is
compiled so that when it is called, it will stash away some
information about where it was called from. From this, the profiler
can figure out what function called it, and can count how many times
it was called. This change is made by the compiler when your program
is compiled with the `-pg' option, which causes every function to call
mcount (or _mcount, or __mcount, depending on the OS and compiler) as
one of its first operations.
So the timing of your application would change quite a bit when you turn on -pg.
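To make that concrete, here is a minimal throwaway sketch of the -pg workflow. Note that only code you compile yourself gets the mcount calls; a prebuilt vendor .so is not re-instrumented, but the per-call overhead and the profiling timer (which delivers periodic SIGPROF signals for PC sampling) still change how your application behaves at run time:

    /* pgdemo.c - a minimal sketch of the gprof/-pg workflow.
       Build, run, then inspect the call counts and flat profile:
           gcc -pg pgdemo.c -o pgdemo
           ./pgdemo               (writes gmon.out in the current directory)
           gprof ./pgdemo gmon.out
       Every function compiled with -pg gets an extra call to mcount on entry,
       which is where the overhead and timing changes come from. */
    #include <stdio.h>

    static double __attribute__((noinline)) burn(int n)
    {
        double s = 0.0;
        for (int i = 1; i <= n; i++)
            s += 1.0 / i;          /* some CPU work for the profiler to attribute */
        return s;
    }

    int main(void)
    {
        double total = 0.0;
        for (int i = 0; i < 2000; i++)
            total += burn(100000);
        printf("%f\n", total);
        return 0;
    }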
If you would like to instrument your code without significantly affecting the timings, you could possibly look at oprofile. It does not pose as significant an overhead as gprof does.
Another fairly recent tool that serves as a good lightweight profiling tool is perf.
The profiling tools are useful primarily in understanding the CPU-bound pieces of your library/application and can help you optimize those critical pieces. Most of the time they serve to identify some culprit function/method which wastes CPU cycles. So do not use them as the sole tool for debugging any and all issues.
Most vendor libraries would also provide means to turn on extra debugging or dumping extra information during runtime. They include means such as environment variables, log files, /proc or /sys interfaces for drivers, etc. and sometimes even tools to increase debugging levels at runtime. See if you can leverage these.
If you have defined APIs in a library/driver, you should run unit-tests on them instead of trying to debug the whole application you've built.
If you find that a certain unit test fails, send the source code of the unit test to your vendor and ask them to fix the bug. If it is not a bug, your vendor will at least point you towards the right set of APIs or the semantics to use.
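As a sketch of such a unit test: camera_open(), camera_acquire_frame() and camera_close() are hypothetical names standing in for whatever the vendor header actually declares, and the timing is done with plain clock_gettime so the binary can be built without -pg:

    /* camera_api_test.c - a minimal sketch of a standalone unit test for a
       single vendor API call, timed without any -pg instrumentation.
       camera_open(), camera_acquire_frame() and camera_close() are HYPOTHETICAL
       names; substitute the functions declared in the vendor header and link
       against the vendor .so as usual. */
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical prototypes normally provided by the vendor header. */
    int  camera_open(void);
    int  camera_acquire_frame(void *buf, unsigned size);
    void camera_close(void);

    int main(void)
    {
        static unsigned char frame[640 * 480 * 2];
        if (camera_open() != 0) {
            fprintf(stderr, "open failed\n");
            return 1;
        }

        int failures = 0;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < 100; i++)
            if (camera_acquire_frame(frame, sizeof frame) != 0)
                failures++;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("100 acquisitions in %.3f s, %d failures\n", secs, failures);
        camera_close();
        return failures ? 1 : 0;
    }

Running the same test with and without -pg (and with other profilers such as perf) helps separate genuine library bugs from failures introduced by the instrumentation's timing changes.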

OpenCL distribution

I'm currently developing an OpenCL application for a very heterogeneous set of computers (using JavaCL, to be specific). In order to maximize performance I want to use a GPU if it's available, and otherwise fall back to the CPU and use SIMD instructions. My plan is to implement the OpenCL code using vector types, because my understanding is that this allows CPUs to vectorize the instructions and use SIMD instructions.
My question, however, is which OpenCL implementation to use. E.g. if the computer has an Nvidia GPU I assume it's best to use Nvidia's library, but if no GPU is available I want to use Intel's library to get the SIMD instructions.
How do I achieve this? Is this handled automatically, or do I have to include all the libraries and implement some logic to pick the right one? It feels like this is a problem that more people than me are facing.
Update
After testing the different OpenCL drivers, this is my experience so far:
Intel: crashed the JVM when JavaCL tried to call it. After a restart it didn't crash the JVM, but it also didn't return any usable devices (I was using an Intel i7 CPU). When I compiled the OpenCL code offline it seemed to be able to do some auto-vectorization, so Intel's compiler seems quite nice.
Nvidia: refused to install their WHQL drivers because it claimed I didn't have an Nvidia card (that computer has a GeForce GT 330M). When I tried it on a different computer I managed to get all the way to creating a kernel, but at the first execution it crashed the drivers (the screen flickered for a while and Windows 7 said it had to restart the drivers). The second execution caused a blue screen of death.
AMD/ATI: refused to install the 32-bit SDK (I tried that since I will be using a 32-bit JVM), but the 64-bit SDK worked well. This is the only driver I've managed to execute the code on (after a restart, because at first it gave a cryptic error message when compiling). However, it doesn't seem to be able to do any implicit vectorization, and since I don't have an ATI GPU I didn't get any performance increase compared to the Java implementation. If I use vector types I might see some improvements though.
TL;DR None of the drivers seem ready for commercial use. I'm probably better off creating a JNI module with C code compiled to use SSE instructions.
First try to understand hosts & devices: http://www.streamcomputing.eu/blog/2011-07-14/basic-concept-hosts-and-devices/
Basically you can just do exactly what you described: check if a certain driver is available and, if not, try the next one. Which one you try first depends completely on your own preference; I would pick the device I have tested my kernel on most. In JavaCL you can pick the fastest device with JavaCL.createBestContext and CLPlatform.getBestDevice; see the host code here: http://ochafik.com/blog/?p=501
Note that NVidia does not support CPUs via their driver; only AMD and Intel do. Also, targeting multiple devices (say 2 GPUs and a CPU) is a bit more difficult.
There is no API providing what you want. However, you can do the following:
I suggest you iterate over the platforms from clGetPlatformIDs and, for each platform, query the number of devices (clGetDeviceIDs) and the device type of each device, and pick the platform which has both types.
Then build a map in your code that maps each device type to the list of platforms supporting it, ordered in some manner.
Finally, just get the first item in the list corresponding to CL_DEVICE_TYPE_CPU and the first item corresponding to CL_DEVICE_TYPE_GPU.
If both returned results are equal (platform_cpu == platform_gpu), then pick one of them and use it for both.
If there is a platform supporting both, you will get a match as before, since the lists are ordered. You can then also do load balancing on a single platform if you like, like what Intel has.
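A minimal sketch of that enumeration using the OpenCL C host API (the question uses JavaCL, which wraps these same calls); the fixed 16-platform cap and the "first match wins" ordering are simplifications of the ordered-list idea above:

    /* pick_platforms.c - a minimal sketch of the enumeration described above.
       Build e.g. with:  gcc pick_platforms.c -lOpenCL */
    #include <stdio.h>
    #include <CL/cl.h>

    /* Return the first platform exposing at least one device of the given type,
       or NULL if none does.  "First" means enumeration order here; a real
       implementation would order the candidates by preference. */
    static cl_platform_id first_platform_with(cl_device_type type)
    {
        cl_uint nplat = 0;
        if (clGetPlatformIDs(0, NULL, &nplat) != CL_SUCCESS || nplat == 0)
            return NULL;

        cl_platform_id plats[16];
        if (nplat > 16) nplat = 16;
        clGetPlatformIDs(nplat, plats, NULL);

        for (cl_uint i = 0; i < nplat; i++) {
            cl_uint ndev = 0;
            if (clGetDeviceIDs(plats[i], type, 0, NULL, &ndev) == CL_SUCCESS && ndev > 0)
                return plats[i];
        }
        return NULL;
    }

    int main(void)
    {
        cl_platform_id gpu = first_platform_with(CL_DEVICE_TYPE_GPU);
        cl_platform_id cpu = first_platform_with(CL_DEVICE_TYPE_CPU);

        if (gpu && cpu && gpu == cpu)
            puts("one platform serves both GPU and CPU - use it for both");
        else if (gpu)
            puts("GPU platform found - prefer it, fall back to the CPU platform");
        else if (cpu)
            puts("no GPU platform - fall back to the CPU platform");
        else
            puts("no usable OpenCL platform found");
        return 0;
    }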
Sorry for being late to the party, but regarding Intel's implementation behaviour under JavaCL, I'm afraid you've been bitten by a JavaCL bug:
https://github.com/ochafik/nativelibs4java/issues/297
Fixed in JavaCL 1.0.0-RC2!
Cheers
