Delve is an amazing debugger. Does Delve support hot swapping of changes, or something similar to the Java JVM? It takes me a lot of time to copy my code into Docker's build VM, then build all the files, then build and deploy dlv, then copy all the binaries to the runtime Docker container. I am looking to speed up my flow, so I was wondering if hot swap will ever be supported?
Does delve support hot swapping of changes
No, because Go itself does not support this: Go is statically compiled, meaning the output is a single, self-contained executable file. It is not possible to hot-swap parts of a statically compiled binary.
Fortunately, Go is highly optimized for fast compilation. When properly configured, even the most complex Go programs can recompile in seconds after small changes, because unchanged packages are cached in the build cache and require no recompilation.
This should provide most or all of the benefit (to debugging) that hot-swapping would, without the added complexity.
Related
I want to install a driver for ROS (Robot Operating System), and I have two options: the binary install, or compiling and installing from source. I would like to know which installation is better, and what the advantages and disadvantages of each one are.
Source: a.k.a. source code, usually in some sort of tarball or zip file. This is raw programming-language code. You need some sort of compiler (javac for Java, gcc for C++, etc.) to create the executable that your computer then runs.
Advantages:
You can see what the source code is, which means...
You can edit the end result program to behave differently
Depending on what you're doing, when you compile you can enable certain optimizations that will work on your machine and ONLY your machine (or one EXACTLY like it). For instance, for some sort of graphics-rendering software, you could compile it to enable GPU support, which would increase the rendering speed. (A compiler-flag sketch of this idea appears after this section's disadvantages.)
You can create a version of an application for a different OS/Chipset (see Binary below)
Disadvantages:
You have to have your compiler installed
You need to manually install all required libraries, which frequently also need to be compiled (and THEIR libraries need to be installed, etc.) This can easily turn a quick 30-second command into a multi-hour project.
There are any number of things that could go wrong, and if you're not familiar with what the various errors mean, finding support online could be quite difficult.
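To make the machine-specific-optimization point concrete, here is a minimal sketch (the GCC flags are real; the file and function names are illustrative) of one source file built generically and built for the host CPU:

```c
/* dot.c - hypothetical example: the same source, built two ways.
 *
 *   gcc -O2                dot.c -o dot-generic   # runs on any x86-64
 *   gcc -O2 -march=native  dot.c -o dot-native    # tuned to THIS machine only
 *
 * With -march=native, GCC may emit AVX/FMA instructions for the loop
 * below; that binary can then crash with SIGILL on CPUs lacking them.
 */
#include <stdio.h>

float dot(const float *a, const float *b, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; i++)   /* a candidate for auto-vectorization */
        sum += a[i] * b[i];
    return sum;
}

int main(void) {
    float a[1024], b[1024];
    for (int i = 0; i < 1024; i++) { a[i] = i * 0.5f; b[i] = i * 0.25f; }
    printf("%f\n", dot(a, b, 1024));
    return 0;
}
```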
Binary: This is the actual program that runs. This is the executable that gets created when you compile from source. They typically have all necessary libraries built into them, or install/deploy them as necessary (depending on how the application was written).
Advantages:
It's ready-to-run. If you have a binary designed for your processor and operating system, then chances are you can run the program and everything will work the first time.
Less configuration. You don't have to set up a whole bunch of configuration options to use the program; it just uses a generic default configuration.
If something goes wrong, it should be a little easier to find help online, since the binary is pre-compiled and other people may be using it, which means you are running the EXACT same program as they are, not one optimized for your system.
Disadvantages:
You can't see or edit the source code, so you can't get those optimizations or tweak it for your specific application. Additionally, you don't really know what the program is going to do, so there could be nasty surprises waiting for you (this is why antivirus software is useful, although LESS necessary on a Linux system).
Your system must be compatible with the Binary. For instance, you can't run a 64-bit application on a 32-bit operating system. You can't run an Intel binary for OS X on an older PowerPC-based G5 Mac.
In summary, which one is "better" is up to you. Only you can decide which one will be necessary for whatever it is you're trying to do. In most cases, using the binary is going to be just fine, and give you the least trouble. Sometimes, though, it is nice to have the source available, if only as documentation.
I have a program that takes a lot of memory and time to compile. I measured that without debugging symbols compilation takes far fewer resources, but I would like to always have them, even for "release" builds, so that crash dumps are meaningful.
Is it possible to create debugging symbols (-ggdb3) with either gcc or clang for an executable that was not originally compiled with them? I've been told that just recompiling the program with -ggdb3 works, but I don't know how reliable that is.
Assuming the build chain is deterministic, which is a highly desirable goal for toolchains, and assuming you have not changed the source in any meaningful way (which practically means in any way at all), then recompiling with -ggdb3 will be reliable; with GCC and Clang, -g is designed not to affect code generation, only to add debug sections. However, I am sure it is possible to construct examples where this doesn't go as planned. So, as your intuition suggests, building the debugging symbols at the same time should be considered a Good Thing.
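If you want to check this on your own toolchain, here is a minimal sketch (assuming GCC and GNU binutils; the file names are illustrative) that rebuilds with symbols and verifies the machine code is unchanged:

```c
/* hello.c - trivial translation unit for the check described below.
 *
 *   gcc -O2        -o app      hello.c
 *   gcc -O2 -ggdb3 -o app-dbg  hello.c            # same source, same flags + symbols
 *   objcopy -O binary --only-section=.text app     app.text
 *   objcopy -O binary --only-section=.text app-dbg app-dbg.text
 *   cmp app.text app-dbg.text                     # identical => same machine code
 *
 * objcopy --only-keep-debug app-dbg app.debug can then split the symbols
 * into a separate file that gdb loads on demand for crash-dump analysis.
 */
#include <stdio.h>

int main(void) {
    puts("determinism check");
    return 0;
}
```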
Is there a way, or an application, to test performance by making the app execute slower? I want to be sure that my app will perform well on older hardware.
Just adding stalls in software won't necessarily imitate any older hardware; it would just show you how the stalled code behaves on the new hardware (and if the stalls aren't properly serializing, they may actually be avoided altogether).
If you just want to see how the code behaves without some specific ISA features, you can disable them at compile time, or even compile for an older architecture. That won't make your CPU run any slower, of course, but the program won't be able to use, for example, AVX/SSE vector instructions (on x86), or other dedicated instructions. (A small sketch of this follows below.)
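As a minimal sketch of the compile-time approach (the flags and predefined macros are standard GCC/x86 ones; the file name is illustrative):

```c
/* features.c - hypothetical example of compiling away ISA features.
 *
 *   gcc -O2 -mavx2              features.c -o with-avx2
 *   gcc -O2 -mno-avx2 -mno-avx  features.c -o without-avx
 *   gcc -O2 -march=core2        features.c -o old-arch   # pre-AVX baseline
 */
#include <stdio.h>

int main(void) {
#ifdef __AVX2__
    puts("compiled with AVX2: vectorized paths available");
#elif defined(__AVX__)
    puts("compiled with AVX only");
#else
    puts("compiled without AVX: scalar/SSE code only");
#endif
    return 0;
}
```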
If you want an old system + OS configuration, you can use emulation, e.g. DOSBox.
If you want an even higher level of realism, you can find a hardware simulator that models the target hardware and run on that (assuming you can cross-compile your code for it).
And of course, if you want an even more realistic experiment, and willing to go the extra mile, just get a specimen of that old HW, wipe the dust off, and build and run on it :)
Why does building a C/C++ app take so long compared to apps in other languages (Java, for example)?
I am trying to build Ubuntu Unity, and it takes about 4 minutes on my local machine.
I think the process of generating object files is the one that takes the most time.
Any advice?
If you want to speed up rebuilds you can use ccache. Also take a look at your gcc version, as older versions are known to lag behind; Clang often beats them by a wide margin, too.
I'm not going deep into compilation speed because this is a HUGE topic. It starts with the fact that C/C++ are fully compiled languages, while in Java you never compile to machine code; you just generate bytecode and leave everything else to the VM.
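To make the ccache suggestion concrete, here is a minimal sketch (ccache and its -s flag are real; the file name is illustrative):

```c
/* unit.c - any translation unit; used to demonstrate ccache.
 *
 *   ccache gcc -O2 -c unit.c -o unit.o   # cold cache: real compilation
 *   rm unit.o
 *   ccache gcc -O2 -c unit.c -o unit.o   # cache hit: near-instant
 *   ccache -s                            # show hit/miss statistics
 *
 * In a build system, set CC="ccache gcc" (or symlink gcc to ccache)
 * so every compiler invocation goes through the cache.
 */
int answer(void) { return 42; }
```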
There is a device driver for a camera device provided to us as a .so library file by the vendor.
Only the header file with API's is available which provides the list of functions that we can work with the device. Our application is linked with the .so library file provided by the vendor and uses the interface functions provided for our objective.
When we wanted to measure the time taken by our application in handling different tasks, we added GCC's -pg flag and recompiled and rebuilt our application.
But we found that with this executable built with -pg, we observe random failures in the camera image-acquisition functions. Since we are using the vendor's .so library file, we do not know what is going wrong inside those functions.
So, in general, I wanted to understand the possible reasons for such a failure mode. Any pointers or documents that explain what goes on inside profiling, and its side effects, are appreciated.
This answer is a helpful overview of how the gcc -pg profiler actually works. The take-home point is mostly to do with possible changes to timing: if your library has any kind of time sensitivity in it, introducing profiler overhead might change the time it takes to execute parts of the code, perhaps violating some kind of constraint.
If you look at the gprof documentation, it explains the implementation details:
Profiling works by changing how every function in your program is compiled so that when it is called, it will stash away some information about where it was called from. From this, the profiler can figure out what function called it, and can count how many times it was called. This change is made by the compiler when your program is compiled with the `-pg' option, which causes every function to call mcount (or _mcount, or __mcount, depending on the OS and compiler) as one of its first operations.
So the timing of your application would change quite a bit when you turn on -pg.
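To see this overhead for yourself, here is a hypothetical micro-benchmark (the flags and tools are real; the function and numbers are illustrative) that builds the same code with and without -pg:

```c
/* pg_overhead.c - hypothetical micro-benchmark showing -pg overhead:
 *
 *   gcc -O2      pg_overhead.c -o plain && time ./plain
 *   gcc -O2 -pg  pg_overhead.c -o prof  && time ./prof
 *   gprof prof gmon.out | head     # inspect the flat profile afterwards
 *
 * With -pg, the compiler inserts a call to mcount() at the entry of
 * every function, so the hot call below pays that cost millions of times.
 */
#include <stdio.h>

/* deliberately not inlined so the per-call instrumentation survives -O2 */
__attribute__((noinline)) long step(long x) {
    return x * 2654435761u % 1000003;
}

int main(void) {
    long acc = 0;
    for (long i = 0; i < 50 * 1000 * 1000; i++)
        acc = step(acc + i);
    printf("%ld\n", acc);
    return 0;
}
```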
If you would like to instrument your code without significantly affecting the timings, you could look at oprofile, which does not impose as significant an overhead as gprof does.
Another fairly recent tool that serves as a good lightweight profiling tool is perf.
Profiling tools are useful primarily for understanding the CPU-bound pieces of your library/application and can help you optimize those critical pieces. Most of the time they serve to identify some culprit function/method that wastes CPU cycles, so do not use them as the sole tool for debugging any and all issues.
Most vendor libraries also provide means to turn on extra debugging or to dump extra information at runtime: environment variables, log files, /proc or /sys interfaces for drivers, and sometimes even tools to increase debugging levels on the fly. See if you can leverage these.
If you have defined APIs in a library/driver, you should run unit tests against them instead of trying to debug the whole application you've built (a minimal sketch follows below).
If you find that a certain unit test fails, send the source code of the unit test to your vendor and ask them to fix the bug. If it is not a bug, your vendor will at least point you towards the right set of APIs, or the semantics, to use.
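As a minimal sketch of such a unit test (the cam_* names are hypothetical stand-ins for the vendor's actual header API; build it with the same flags as your application, including -pg, to reproduce the failure in isolation):

```c
/* cam_unit_test.c - isolate the failing vendor call.
 *
 *   gcc -pg cam_unit_test.c -lvendorcam -o cam_test   # hypothetical library name
 */
#include <stdio.h>

/* Hypothetical vendor API; replace with the real header's declarations. */
typedef struct cam_handle cam_handle_t;
cam_handle_t *cam_open(int index);
int cam_acquire(cam_handle_t *cam);
void cam_close(cam_handle_t *cam);

int main(void) {
    cam_handle_t *cam = cam_open(0);   /* device index 0 */
    if (cam == NULL) {
        fprintf(stderr, "open failed\n");
        return 1;
    }
    /* Exercise only the failing call, many times, with nothing else running. */
    for (int i = 0; i < 1000; i++) {
        if (cam_acquire(cam) != 0) {
            fprintf(stderr, "acquire failed on iteration %d\n", i);
            cam_close(cam);
            return 1;
        }
    }
    cam_close(cam);
    puts("1000 acquisitions OK");
    return 0;
}
```

If this fails only when built with -pg, that is strong evidence for a timing or instrumentation interaction, and it is a compact reproducer to hand the vendor.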