What is the difference between "binary install" and "compile and install from source"? Which is better?

I want to install a driver for ROS (Robot Operating System), and I have two options: the binary install, or compiling and installing from source. I would like to know which installation is better, and what the advantages and disadvantages of each one are.

Source: a.k.a. source code, usually in some sort of tarball or zip file. This is raw programming-language code. You need some sort of compiler (javac for Java, gcc/g++ for C and C++, etc.) to create the executable that your computer then runs.
Advantages:
You can see what the source code does, which means...
You can edit the resulting program to behave differently.
Depending on what you're doing, when you compile you can enable certain optimizations that will work on your machine and ONLY your machine (or one exactly like it). For instance, for some sort of graphics-rendering software, you could compile it with GPU support enabled, which would increase the rendering speed (see the sketch at the end of this answer).
You can create a version of an application for a different OS/Chipset (see Binary below)
Disadvantages:
You have to have your compiler installed
You need to manually install all required libraries, which frequently also need to be compiled (and THEIR libraries need to be installed, etc.) This can easily turn a quick 30-second command into a multi-hour project.
There are any number of things that could go wrong, and if you're not familiar with what the various errors mean, finding support online could be quite difficult.
Binary: This is the actual program that runs. This is the executable that gets created when you compile from source. They typically have all necessary libraries built into them, or install/deploy them as necessary (depending on how the application was written).
Advantages:
It's ready-to-run. If you have a binary designed for your processor and operating system, then chances are you can run the program and everything will work the first time.
Less configuration. You don't have to set up a whole bunch of configuration options to use the program; it just uses a generic default configuration.
If something goes wrong, it should be a little easier to find help online, since the binary is pre-compiled: other people may be using it, which means you are using the EXACT same program as them, not one optimized for your system.
Disadvantages:
You can't see/edit the source code, so you can't add optimizations or tweak it for your specific application. Additionally, you don't really know what the program is going to do, so there could be nasty surprises waiting for you (this is why antivirus software is useful, although LESS necessary on a Linux system).
Your system must be compatible with the Binary. For instance, you can't run a 64-bit application on a 32-bit operating system. You can't run an Intel binary for OS X on an older PowerPC-based G5 Mac.
In summary, which one is "better" is up to you. Only you can decide which one will be necessary for whatever it is you're trying to do. In most cases, using the binary is going to be just fine, and give you the least trouble. Sometimes, though, it is nice to have the source available, if only as documentation.
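(Not part of the original answer, just an illustration of the optimization point above: a trivial C program, with the kind of gcc invocations that separate a machine-tuned build from the generic build a binary package would ship written as comments. The exact flags are examples, not a recommendation.)

    /*
     * hello.c - minimal example for the "optimize for your machine" point.
     *
     * Compile from source, tuned for the build machine:
     *     gcc -O3 -march=native -o hello hello.c
     * A distributor's generic binary would instead be built with
     * conservative flags so it runs on any reasonably recent CPU, e.g.:
     *     gcc -O2 -o hello hello.c
     */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello from a freshly compiled binary!\n");
        return 0;
    }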

Related

How to specify the physical CoreIDs used for "CLOSE" when specifying OMP_PROC_BIND?

We are trying to optimize HPC applications using OpenMP on a new hardware platform. These applications need precise placement/pinning of their threads to cores, or performance falls by half. Currently, we provide the user a custom GOMP_CPU_AFFINITY map for each platform, but this is cumbersome because it's different on each hardware version, and even platforms with different firmware versions sometimes change their physical CoreID mappings - all things impossible for the user to detect on the fly.
It would be a great help if HPC applications could simply set GOMP_PROC_BIND to "close" and OpenMP would do the right thing for the given platform - but to make this possible, the hardware vendor would need to define what "close" means for each machine. We'd like to do this, but we can't tell how/where OpenMP gets CoreID lists to use for things like close, spread, etc. (For various external requirements, the CoreID spatial pattern on this machine would appear utterly random to a software writer.)
Any advice as to where/how OpenMP defines the CoreID lists for OMP_PROC_BIND so we could configure them? We are comfortable with the idea that we might need a custom version of OpenMP (with altered source code) for this platform if needed.
Thanks, everyone. :)
Jeff
Expanding on what @VictorEijkhout said...
You seem to have invented an environment variable that I can't find anywhere with Google (GOMP_PROC_BIND), mixing it up with the OpenMP standard environment variable (OMP_PROC_BIND). If GOMP_PROC_BIND exists, the name suggests that it is a GNU-specific feature. Note, too, that one of the two Google hits for GOMP_PROC_BIND says "Code that reads the setting is buggy. Setting is invalid and ignored at runtime." So, if you are setting that, it is unsurprising that it has no effect!
I will therefore answer for the more general case of OMP_PROC_BIND.
The binding of OpenMP threads to logical CPUs clearly has to be done at runtime, since, beyond its ISA, the compiler has no knowledge of the hardware on which the compiled code will run. Therefore you need to be looking at the runtime library code.
I have not looked at GNU's libgomp, but, where it can, LLVM's libomp uses the hwloc library to explore the machine hardware. Since hwloc also includes other useful tools for machine exploration (such as lstopo) it is likely that your effort is best invested in ensuring good hwloc support on your machine, at which point there will be no need to delve inside the OpenMP runtime.
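(As a small, hedged addition to the answer above: assuming an OpenMP 4.5-capable runtime such as GNU libgomp or LLVM's libomp, the sketch below asks the runtime for the place list it actually built, which is where OMP_PROC_BIND=close/spread ultimately get their CPU IDs. Running it under different OMP_PLACES / OMP_PROC_BIND settings shows how a given platform is being interpreted.)

    /* probe_places.c - build with: gcc -fopenmp probe_places.c -o probe_places */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int nplaces = omp_get_num_places();
        printf("top-level proc_bind policy: %d, places: %d\n",
               (int)omp_get_proc_bind(), nplaces);

        for (int p = 0; p < nplaces; ++p) {
            int nprocs = omp_get_place_num_procs(p);
            int *ids = malloc(sizeof(int) * (size_t)(nprocs > 0 ? nprocs : 1));
            omp_get_place_proc_ids(p, ids);   /* CPU IDs making up this place */
            printf("place %d:", p);
            for (int i = 0; i < nprocs; ++i)
                printf(" %d", ids[i]);
            printf("\n");
            free(ids);
        }
        return 0;
    }

Run it with, e.g., OMP_PLACES=cores OMP_PROC_BIND=close ./probe_places and compare against your intended CoreID layout.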

How to watch which instructions execute in a macOS or Windows binary?

I'm trying to reverse engineer some functionality from a decently large binary (~31 MB). I was wondering what the best way is to "watch" the instructions of an executable. Specifically, I want to run the executable, turn on "watch", trigger a feature in the executable, and then be able to see which regions of the executable binary were run. Then, with those addresses, I'd like to look at the disassembled instructions.
I'm aware that a lot of the instructions will probably be UI code, as the GUI seems to be written in Qt, but hopefully it will help me narrow down which part of the binary I need to focus on.
I would prefer a tool that works under Windows, but macOS would work also, as I have a version of the binary for both of those systems.
I'm aware of time-reversible debuggers, but I'm unsure if those would be possible to use with such a large binary.

gdb, how to step into the C runtime? Where is crt_c.c?

When I'm stepping into the debugged program, it says that it can't find the crt/crt_c.c file. I have the sources of gcc 6.3.0 downloaded, but where is crt_c.c in there?
Also, how can I find the source code for printf and rand in there? I'd like to step through them in a debugger.
The IDE is Code::Blocks, if that's important.
Edit: I'm trying to do this because I'm trying to decrease the size of my executable. Going straight to freestanding leaves me with a lot of missing functions, so I intend to study and replace them one by one. I'm trying to do that to make my program a little smaller and faster, and to be able to study the assembly output a bit more easily.
Also, forgot to mention, I'm on Windows, msys2. But the answer is still helpful.
How can I find source code for printf and rand in there?
They (printf, rand, etc.) are part of your C standard library, which (on Linux) is outside of the GCC compiler. But crt0 is provided by GCC (however, it is often not compiled with debug information), and some C files there are generated in the build tree during the compilation of GCC.
(On Windows, most of the C standard library is proprietary - inside some DLL provided by Microsoft - and you are probably forbidden to look into the implementation or to reverse-engineer it; AFAIK EU laws might mention some exception related to interoperability, but then you need to consult a lawyer, and I am not a lawyer.)
Look into GNU glibc (or perhaps musl-libc) if you want to study its source code. libc generally uses system calls (listed in syscalls(2)) provided by the Linux kernel.
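(Illustration, not part of the original answer: on a POSIX system, the sketch below prints one message through libc's buffered printf and one through the raw write(2) system call that glibc/musl eventually issue underneath.)

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *msg = "hello via write(2)\n";

        printf("hello via printf\n");           /* buffered, formatted, inside libc */
        fflush(stdout);                         /* flush so the two lines stay in order */
        write(STDOUT_FILENO, msg, strlen(msg)); /* thin wrapper over the write syscall */
        return 0;
    }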
I'd like to step through them in debugger.
In practice you won't be able to do that easily, because the libc is provided by your distribution and has generally been compiled without debug information in DWARF format.
Some Linux distributions provide a debuggable variant of libc, perhaps as some libc6-dbg package.
(your question lacks motivation and smells like some XY problem)
I intend to study and replace them one by one.
This is very unrealistic (particularly on Windows, whose system call interface is not well documented) and could take you many years (or perhaps more than a lifetime). Do you have that much time?
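(To give a flavour of what even the most trivial replacement looks like, here is a hypothetical stand-in for rand(), using the linear congruential generator from the C standard's example implementation; scaling this up to the whole of libc is exactly the multi-year effort described above.)

    /* Hypothetical stand-in for rand()/srand(), named my_rand/my_srand so it
     * does not collide with the libc symbols.  A real libc may use a
     * different generator, so the sequences will not match. */
    static unsigned long my_seed = 1;

    void my_srand(unsigned int seed)
    {
        my_seed = seed;
    }

    int my_rand(void)                       /* returns a value in [0, 32767] */
    {
        my_seed = my_seed * 1103515245UL + 12345UL;
        return (int)((my_seed / 65536UL) % 32768UL);
    }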
Read also Operating Systems: Three Easy Pieces and look into the OSDev wiki.
I'm trying to do so because I'm trying to decrease size of my executable.
Wrong approach. A debugger needs debug info (e.g. in DWARF), which will increase the size of the executable (but it could later be stripped). BTW, standard C functions are in some common shared library (or DLL on Windows) which is used by many processes.
I'm on windows, msys2.
Bad choice. Windows is proprietary. Linux is made of free software (more than ten billion lines of source code, if you consider all the useful packages inside a typical Linux distribution), whose source code you could study (even if it would take several lifetimes).

Cross-compile on a Linux host for various targets

I have a set of more or less portable C/C++ sources sitting on a Linux development host that I would like to be able to:
compile for 32- and 64-bit Linux targets
cross-compile for 32- and 64-bit Windows targets
cross-compile for 32- and 64-bit Mac targets
and, ideally, without any runtime dependencies on emulation DLLs like cygwin1.dll, the MinGW runtime, etc., though I could use them if there's no other choice. If I have to use them, I'd prefer statically linking their functionality into my code.
The target binary that is desired is:
a shared library (.so) for Linux and Mac targets, and
a DLL for Windows.
I have no idea how to build a cross-compiler (and the associated toolchain) from scratch. I'm hearing that pre-built cross-compiler toolchains are available for various host-and-target combinations, but I don't know where to find them, or even how to use them without running into runtime crashes/coredumps later due to pointer-model subtleties (LP64, LLP64, etc.), wrong or inadequate compiler switches, or other misconfiguration.
I've so far been unable to find relevant and complete information on the above, and what little I've managed to find is scattered across so many bits and pieces that I'm not even sure whether what I've read is complete or even correct (i.e., whether it applies fully to my case).
I'm not a compilers expert, just a regular user of them. I would appreciate information on achieving the above compilation goals.
I would like to cross-compile a library for Mac OS X on Linux, and I am considering imcross. The instructions on the site are simple, but every time you set up a cross-compiling environment you have to fix a lot of things, so I don't expect it to be straightforward. You can check on the website that there are some limitations to this project, but it is the best I have come across.
Since this is not a priority for me right now (I have other stuff to do before performing this task), I haven't set up the cross environment yet. I am going to do that in a few days' time.
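(Not part of the answer above, but one source-side detail that comes up whenever the same code must build as a .so on Linux/Mac and as a DLL on Windows is symbol export/visibility. A hedged sketch, with made-up names:)

    /* Illustrative only: MYLIB_API and mylib_hello are invented names.
     * The same source builds as a Windows DLL with a MinGW cross-compiler
     * and as a .so with the native gcc (or an OS X cross-toolchain). */
    #if defined(_WIN32)
    #  define MYLIB_API __declspec(dllexport)
    #else
    #  define MYLIB_API __attribute__((visibility("default")))
    #endif

    MYLIB_API int mylib_hello(void)
    {
        return 42;
    }

It could be built, for instance, with gcc -shared -fPIC -o libmylib.so mylib.c on Linux, and with x86_64-w64-mingw32-gcc -shared -o mylib.dll mylib.c using a MinGW-w64 cross-compiler (the exact toolchain prefix depends on which cross-compiler you install).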

Set up a development environment on Linux targeting Linux and Windows

For a university course I have to write an HTTP server which is supposed to run on both Linux and Windows.
I have got a humble Linux machine which I don't think can handle any kind of heavy virtual environment, nor am I willing to go through the hassle of installing one.
This is the first project of mine complex enough (I estimate ~1.5 months to develop) to require an environment comfortable enough to alternate rapidly between short coding and testing sessions (the latter on both platforms, of course).
So, I was wondering what could be the best setup for this situation. I think testing it on Wine would be OK (it is not a real-world thing, after all), and I installed MinGW for the Windows-targeting part.
Basically, a simple well-written makefile could solve my problem... It should build both the Linux and Windows binaries and place them in the respective folders (the Windows one in the Wine sub-tree), and I'm all done! But I feel very inexperienced in this area and I really don't know where to start. Maybe the make manual, ahah! :)
Thoughts, suggestions, anything I didn't think of or don't know about!
Thank you!
(PS: I'm planning to use Emacs as my editor, or maybe learn vim. Unless Eclipse provides some kind of Skynet-like plugin that entirely solves this problem... :)
You're on the right track. It's not that complicated, really, thanks to MinGW. You basically need two things:
The code has to be portable across the OSes. MinGW has some POSIX support, but you'll probably need to either use Cygwin in order to be able to use the POSIX interface, or have your own compatibility layer for interfacing with the OS (a small sketch of such a layer follows at the end of this answer). I'd probably go for Cygwin, as then you can code only against POSIX and won't have to test and debug your compatibility layer. Also, make sure you don't use any external libraries that are OS-specific. Non-portable code often results in a compile error, but make sure you test the application thoroughly anyway.
The toolchains for targeting Linux and Windows. You already have them; you just need to use them correctly. Normally you'd use a variable like $(CROSS_COMPILE) as a prefix when calling the toolchain during cross-compilation. So when compiling for Linux, you call gcc, ld, etc. (with the CROSS_COMPILE variable empty), and when compiling for Windows you call e.g. i486-mingw32-gcc, i486-mingw32-ld, etc., i.e. CROSS_COMPILE=i486-mingw32-. Or just define CC, LD, etc. depending on the target.
I wrote a small game on Linux and made it run on Windows as well. If you browse the code, you can see the code has next to no #ifdef jungle (basically just some extra debugging features enabled for Linux), and the Makefile is simple as well, with no complicated handling for cross-compilation, just the possibility to override CC etc. like it should be. As lots of important open source software is written this way (especially software that's used by the desktop and embedded devices), you should also be able to find lots of other examples on how to set up the build environment correctly.
As for testing the application on Windows, I think the best option is if you can find a real Windows machine somehow. If you do everything correctly, it should run the same as on Linux and you won't need to continuously test your application on both OSes. If testing on a Windows machine is not possible, a VM would be the next best choice, though it would probably be more difficult to set it up. Wine is a good backup plan, but I don't think you can be sure your application works well on Windows if you only tested it on Wine.
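(To illustrate the compatibility-layer option from point 1, here is a hedged sketch of how a tiny OS abstraction for a sockets-based HTTP server might look; net_init, net_close and net_socket_t are invented names for this example, and the Windows side needs linking against ws2_32.)

    #if defined(_WIN32)
    #  include <winsock2.h>
    typedef SOCKET net_socket_t;

    static int net_init(void)               /* Winsock needs explicit startup */
    {
        WSADATA wsa;
        return WSAStartup(MAKEWORD(2, 2), &wsa) == 0 ? 0 : -1;
    }
    static void net_close(net_socket_t s) { closesocket(s); }
    #else
    #  include <sys/socket.h>
    #  include <unistd.h>
    typedef int net_socket_t;

    static int net_init(void) { return 0; } /* nothing to do on POSIX */
    static void net_close(net_socket_t s) { close(s); }
    #endif

The rest of the server then only ever calls net_init()/net_close() plus the usual socket()/bind()/listen()/accept(), which behave closely enough on both platforms for a project like this.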
