Xeon Phi cannot execute binary file - bash

I am trying to execute a binary file on a Xeon Phi coprocessor, and it comes back with "bash: cannot execute binary file". So I am trying to find a way to either view an error log or have it display what happens when I tell it to execute, so I can see what is causing it not to work. I have already tried bash --verbose but it didn't display any additional information. Any ideas?

You don't specify where you compiled your executable, nor where you tried to execute it from.
To compile a program on the host system to be executed directly on the coprocessor, you must either:
- if using one of the Intel compilers, add -mmic to the compiler command line
- if using gcc, use the cross-compilers provided with the MPSS (/usr/linux-k1om-4.7) - note, however, that the gcc compiler does not take advantage of vectorization on the coprocessor
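For example, a minimal sketch assuming a source file hello.c (the file names are illustrative, and the cross-compiler path and binary name may differ between MPSS versions):

icc -mmic hello.c -o hello.mic                                     # Intel compiler, native coprocessor build
/usr/linux-k1om-4.7/bin/x86_64-k1om-linux-gcc hello.c -o hello.mic # MPSS cross-gcc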
If you want to compile directly on the coprocessor, you can install the necessary files from the additional rpm files provided for the coprocessor (found in mpss-/k1om) using the directions from the MPSS user's guide for installing additional rpm files.
To run a program on the coprocessor, if you have compiled it on the host, you must either:
- copy your executable file and required libraries to the coprocessor using scp, then ssh to the coprocessor yourself to execute the code
- use the micnativeloadex command on the host - you can find a man page for it on the host
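Both approaches, sketched with the illustrative binary from above and assuming the first coprocessor is reachable as mic0:

scp hello.mic mic0:/tmp/       # copy the binary to the coprocessor ...
ssh mic0 /tmp/hello.mic        # ... then run it there
micnativeloadex ./hello.mic    # or let the host upload and run it in one step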
If you are writing a program using the offload model (part of the work is done using the host then some of the work is passed off to the coprocessor), you can compile on the host using the Intel compilers with no special options.
Note, however, that, regardless of which method you use, any libraries to be used with an executable for the coprocessor must themselves be built for the coprocessor. The default libraries exist, but for any library you add, you need to build a version for the coprocessor in addition to any version you make for the host system.
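For instance, a sketch of building and deploying an additional shared library (names are illustrative; the same -mmic rule applies):

icc -mmic -fPIC -shared mylib.c -o libmylib.so   # coprocessor build of the library
scp libmylib.so mic0:/tmp/
ssh mic0 'LD_LIBRARY_PATH=/tmp /tmp/hello.mic'   # make the library findable on the card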
I highly recommend the articles you will find under https://software.intel.com/en-us/articles/programming-and-compiling-for-intel-many-integrated-core-architecture. These articles are written by people who develop and/or support the various programming tools for the coprocessor and should answer most of your questions.

Update: What's below does NOT answer the OP's question - it is one possible explanation for the cannot execute binary file error, but the fact that the error message is prefixed with bash: indicates that the binary is being invoked correctly (by bash), but is not compatible with the executing platform (compiled for a different architecture) - as @Barmar has already stated in a comment.
Thus, while the following contains some (hopefully still somewhat useful) general information, it does not address the OP's problem.
One possible reason for cannot execute binary file is to mistakenly pass a binary (executable) file -- rather than a shell script (text file containing shell code) -- as an operand (filename argument) to bash.
The following demonstrates the problem:
bash printf # fails with '/usr/bin/printf: /usr/bin/printf: cannot execute binary file'
Note how the mistakenly passed binary's path prefixes the error message twice. If the first prefix says bash: instead, the cause is most likely not incorrect invocation, but an attempt to invoke an incompatible binary (compiled for a different architecture).
If you want bash to invoke a binary, you must use the -c option to pass it, which allows you to specify an entire command line; i.e., the binary plus arguments; e.g.:
bash -c '/usr/bin/printf "%s\n" "hello"' # -> 'hello'
If you pass a mere binary filename instead of a full path - e.g., -c 'program ...' - then a binary by that name must exist in one of the directories listed in the $PATH variable that bash sees, otherwise you'll get a command not found error.
If, by contrast, the binary is located in the current directory, you must prefix the filename with ./ for bash to find it; e.g. -c './program ...'
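To illustrate the two lookup rules (ls is used here because, unlike printf, it is not also a shell builtin):

bash -c 'ls'                 # found via a $PATH search
(cd /bin && bash -c './ls')  # found relative to the current directory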

Related

Pipe and Redirect Bash commands don't work in cmake [duplicate]

What I want to achieve
I am trying to set up a toolchain to compile OpenCL applications for Intel FPGAs. Besides building the C++-based host application, I need to invoke the Intel OpenCL offline compiler for the OpenCL kernels.
This step should only take place if the .cl source file was edited or the resulting binaries are missing. My approach is to add a custom command that invokes the CL compiler, and a custom target that depends on the output generated by this command. The offline OpenCL compiler is called aoc and, because multiple SDK versions may be present on the system, I invoke it via an absolute path stored in aocExecutable. This is the relevant part of my CMakeLists.txt:
set (CLKernelName "vector_add")
set (CLKernelSourceFile "${PROJECT_SOURCE_DIR}/${CLKernelName}.cl")
set (CLKernelBinary "${PROJECT_BINARY_DIR}/${CLKernelName}.aocx")

add_executable (HostApplication main.cpp)

# ------ a lot of unnecessary details here ------

add_custom_command (OUTPUT "${CLKernelBinary}"
    COMMAND "${aocExecutable} -march=emulator ${CLKernelSourceFile} -o ${CLKernelBinary}"
    DEPENDS "${CLKernelSourceFile}"
)
add_custom_target (CompileCLSources DEPENDS "${CLKernelBinary}")
add_dependencies (HostApplication CompileCLSources)
What doesn't work
Running this in the CLion IDE under Linux leads to this error:
/bin/sh: 1: /home/me/SDKsAndFrameworks/intelFPGA/18.1/hld/bin/aoc -march=emulator /home/me/CLionProjects/cltest/vector_add.cl -o /home/me/CLionProjects/cltest/cmake-build-debug-openclintelfpgasimulation/vector_add.aocx: not found
The whole command expands correctly, and copying and pasting it into a terminal works without problems, so I'm not sure what the not found error means.
Further Question
Assumed the above problem will be solved, how can I achieve that the custom command is not only invoked if the output file is not present in the build directory but also if the CL source file was edited?
As you can see in the error message, the shell interprets the whole command line
/home/me/SDKsAndFrameworks/intelFPGA/18.1/hld/bin/aoc -march=emulator /home/me/CLionProjects/cltest/vector_add.cl -o /home/me/CLionProjects/cltest/cmake-build-debug-openclintelfpgasimulation/vector_add.aocx
as a single executable.
This is because you wrapped the COMMAND in double quotes in your script.
Remove these double quotes and everything will work.
As in many other scripting languages, double quotes in CMake make the quoted string be interpreted as a single argument to a function or a macro.
But in the add_custom_command/add_custom_target functions, the keyword COMMAND starts a list of arguments, the first of which names the executable, while the others are separate parameters for that executable.
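Applied to the snippet from the question, the COMMAND would become (quoting each argument individually is fine; quoting the entire command line is what breaks it):

add_custom_command (OUTPUT "${CLKernelBinary}"
    COMMAND "${aocExecutable}" -march=emulator "${CLKernelSourceFile}" -o "${CLKernelBinary}"
    DEPENDS "${CLKernelSourceFile}"
)

As for the further question: the DEPENDS clause already covers that case - the custom command re-runs whenever the .cl file is newer than the generated .aocx output.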

What is the `patchShebangs` command in Nix build expressions?

I came across the patchShebangs command while looking at packages in the Nixpkgs repo, and saw it used in various phases of the standard environment's generic builder, but I am not sure what it is for or why it is needed in the first place.
In short: shell scripts used during a Nix build won't work out of the box, because Nix clears the environment, and so the interpreter directive (shebang) on the first line of a script - which determines the program used to evaluate the script body - will point to a program that cannot be found. patchShebangs looks up the interpreter in the Nix store and rewrites the script's shebang accordingly.
0. Introduction
patchShebangs is indirectly mentioned in the Nixpkgs manual when describing the phases of the generic builder of the Nixpkgs standard environment, stating that the fixup phase at one point
rewrites the interpreter paths of shell scripts to paths found in PATH. E.g., /usr/bin/perl will be rewritten to /nix/store/some-perl/bin/perl found in PATH.
It is important to note that (paraphrasing @jonringer's comment), "the patchShebangs command is only available during the build if you source the $stdenv/setup setup hook" (more on that below) "provided by stdenv's (the Nixpkgs standard environment's) default builder (you get this by default when using stdenv.mkDerivation), which is why the starting point of almost all Nix expressions is import <nixpkgs> {}, stdenv.mkDerivation, or something similar."
1. Where is patchShebangs defined
The file patch-shebangs.sh in the Nixpkgs repo (also documented at 6.7.4. patch-shebangs.sh) defines the patchShebangs function, which in turn is used to implement patchShebangsAuto, the setup hook that is registered to run during the fixup phase.
2. Why are shebang rewrites needed when building Nix packages?
According to the comment at the top of patch-shebangs.sh:
# This setup hook causes the fixup phase to rewrite all script
# interpreter file names (`#! /path') to paths found in $PATH. E.g.,
# /bin/sh will be rewritten to /nix/store/<hash>-some-bash/bin/sh.
# /usr/bin/env gets special treatment so that ".../bin/env python" is
# rewritten to /nix/store/<hash>/bin/python. Interpreters that are
# already in the store are left untouched.
# A script file must be marked as executable, otherwise it will not be
# considered.
IMPORTANT NOTE: the criterion above - that the "script file must be marked as executable, otherwise it will not be considered" - is easy to overlook.
The line in a shell script starting with #! is called the shebang (among other names), and it is an interpreter directive telling the executing shell what program to use to interpret the text below; the characters after #! have to constitute an absolute path pointing to that executable. For example, #!/usr/bin/python3 will expect to find the python3 program at that path to carry out the commands in the body of the script, which are written in the Python programming language.
Using shell scripts during package build phases becomes problematic though because
When Nix runs a builder, it initially completely clears the environment (except for the attributes declared in the derivation). For instance, the PATH variable is empty. This is done to prevent undeclared inputs from being used in the build process. If for example the PATH contained /usr/bin, then you might accidentally use /usr/bin/gcc.
The quote above is from the Nix manual, but the builder shown there as an example uses $stdenv/setup - a shell script that sets up a pristine sandbox environment for the build process, unsetting most (all?) environment variables from the calling shell and including only a small number of utilities. (This is done to make builds reproducible, as much as possible.)[1]
$stdenv/setup is usually called implicitly when using stdenv.mkDerivation with the generic builder (i.e., when the builder attribute is left undeclared), but one can also write a custom builder and invoke it explicitly during the build process.
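A minimal sketch of that explicit route (assuming the derivation's builder attribute points to this builder.sh):

# builder.sh
source $stdenv/setup   # set up the sandboxed environment, PATH, and the phase functions
genericBuild           # run the standard phases (unpack, patch, configure, build, ..., fixup)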
TIP: This answer shows one way to find where a certain Nix function is defined (although it is not infallible).
As a corollary, the programs pointed to by the shebang directives won't be at those locations (or will be unreachable from the sandbox), but they are (or will be) present in the Nix store, so the paths need to be re-pointed to their locations there.
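Concretely, the rewrite looks like this (the store hash and version are illustrative):

#!/usr/bin/python3                              (before patchShebangs)
#!/nix/store/<hash>-python3-3.9.6/bin/python3   (after patchShebangs)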
NOTE: The generic builder populates PATH from the inputs of the derivation, so one must make sure that the needed interpreters are included as dependencies.
3. How to use
3.1 Implicitly
As mentioned above, patchShebangs is automatically invoked by the patchShebangsAuto setup hook during the fixup phase whenever a package is built - unless one opts out of this by setting the dontPatchShebangs variable (or the dontFixup variable, for that matter); see Variables controlling the fixup phase in the Nixpkgs manual.
Reminder to self: 6.4 Bash Conditional Expressions.
3.1.0 What scripts is patchShebangs used on when invoked automatically?
Usually on scripts installed by packages (for example to $out/bin).
Or on the ones provided by default by the Nixpkgs standard library? I presume that these have to be generic enough to run on different platforms, so that (1) the template is built, and (2) the scripts' shebangs are patched at the end. (@jtojnar confirmed this conjecture, but this section still needs references, hence the caveat.)
3.1.1 How to use the variables controlling a build phase?
Pass it to mkDerivation like any other variable controlling the builder:
stdenv.mkDerivation {
  # ...
  dontPatchShebangs = true;
  # ...
}
3.2 Explicitly
Historical note: Originally, patchShebangs was not externally callable, but it was later extracted to make its functionality re-usable in other build phases as well.
Again, from the comments in the implementation:
# Run patch shebangs on a directory or file.
# Can take multiple paths as arguments.
# patchShebangs [--build | --host] PATH...
# Flags:
# --build : Lookup commands available at build-time
# --host : Lookup commands available at runtime
# Example use cases,
# $ patchShebangs --host /nix/store/...-hello-1.0/bin
# $ patchShebangs --build configure
It needs to be run on scripts that are to be executed directly (shell scripts included) during build time. These may be
- coming from the source of what is being packaged, or
- written by oneself to be used as helpers during the build process.[2]
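A sketch of such an explicit call in a derivation (the postPatch hook placement and the script path are illustrative):

stdenv.mkDerivation {
  # ...
  postPatch = ''
    patchShebangs scripts/generate-sources.sh
  '';
}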
Specific examples from around the web:
In Nix, how can I build a package that has a Python post-install script? (Unix & Linux Stack Exchange)
hard-coded bin path and NixOS (Stack Overflow)
[QUESTION] Alias and symlinks in NixOS derivations (Reddit)
This systemd-specific issue on IRC
... and quoting @jtojnar:
That is exactly the use case for the explicit patchShebangs call. The Meson build system expects to run src/shared/generate-syscall-list.py, so it calls it. But that fails because /usr/bin/env does not exist in the build sandbox. And it only gets confusing because the kernel/libc/something else reports that the script does not exist, even though it is the interpreter from the shebang that does not exist.
Footnotes
[1]: TODO: Find out more about how the sandbox(es) are built exactly, and what is barred and what is allowed. Quoting @jtojnar for one example:
/usr/bin/env, which is not available in the sandbox either. (NixOS only has that in user space for convenience, but that does not carry over to the Nix sandbox.)
[2]: @jtojnar's comment: "Right, you will not need to use it explicitly for scripts that are only executed at run time, since those will be handled by the implicit call."
All links in this thread have (hopefully) been saved to the Internet Archive.

PVS-Studio: No compilation units were found

I'm using PVS-Studio in a Docker image based on ubuntu:18.04 for cross-compiling a couple of files with arm-none-eabi-gcc. After running pvs-studio-analyzer trace -- .test/compile_with_gcc.sh, the strace_out file is successfully created; it's not empty and contains calls to arm-none-eabi-gcc.
However, pvs-studio-analyzer analyze complains that "No compilation units were found". I tried the --compiler arm-none-eabi-gcc switch with no success.
Any ideas?
The problem was in my approach to compilation. Instead of using a proper build system, I used a wacky shell script (surely, I thought, using a build system for 3 files is overkill; a shell script won't hurt anybody). And in that script I used grep to redefine one constant in the source, kind of like this:
grep -v -i "#define[[:blank:]]\+${define_name}[[:blank:]]" ${project}/src/main/main.c | ~/opt/gcc-arm-none-eabi-8-2018-q4-major/bin/arm-none-eabi-gcc -o main.o -xc -
So the compiler didn't actually compile the proper file, it compiled the output of grep. Naturally, PVS-Studio wasn't able to analyze it.
TL;DR: Don't use shell scripts as a build system.
We have reviewed the strace_out file. It can be handled correctly by the analyzer if the source files and compilers are referenced by absolute paths in the strace_out file. We have a suggestion that might help. You can "wrap" the build commands in calls to pvs-studio-analyzer trace and pvs-studio-analyzer analyze placed inside your script (compile_with_gcc.sh). Thus, the script should start with the command:
pvs-studio-analyzer trace --
and end with the command:
pvs-studio-analyzer analyze
This way we make sure that the build and the analysis are started during the same container run. If the proposed method does not help, please describe the process of building the project and running the analyzer in more detail, command by command. Also tell us whether the container is rerun between the build, the formation of strace_out, and the analysis itself.
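A sketch of what compile_with_gcc.sh would then look like (the single compiler invocation stands in for the real build commands):

#!/bin/sh
pvs-studio-analyzer trace -- arm-none-eabi-gcc -c src/main/main.c -o main.o
pvs-studio-analyzer analyze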
It would also help us a lot if you ran the pvs-studio-analyzer command with the optional --dump-log flag and provided it to us. An example of a command that can be used to do this:
pvs-studio-analyzer analyze --dump-log ex.log
Also, it seems that it will not be possible to solve the problem quickly, so it is probably more convenient to continue the conversation via the feedback form on the product website.

Running "<" command between two different directories

I'm working on a small JS project and trying to get a script to run, which compiles some source files that are written in our own "language x".
To run the compiler normally, you would use the command ./a.out < source.x and it would print out success or compilation errors, etc.
In this case, I'm trying to work between two directories, using this command:
sudo ~/Documents/server/xCompiler/./a.out < ~/Documents/server/xPrograms/source.x
But this produces no output in the terminal at all and doesn't affect the output files. Is there something I'm doing wrong with the use of <? I'm planning to use it in child_process.exec within a node server later.
Any help would be appreciated, I'm a bit stumped.
Thanks.
Redirection operators (<, >, and others like them) describe operations to be performed by the shell before your command is run at all. Because these operations are performed by the shell itself, it's extremely unlikely that they would be broken in a way specific to an individual command: When they're performed, the command hasn't started yet.
There are, however, some more pertinent ways your first and second commands differ:
The second (non-working) one uses a fully-qualified path to the compiler itself. That means that the directory the compiler is found in and the current working directory the compiler runs in can differ. If the compiler looks for files in its current working directory, or in locations relative to it, this can cause a failure.
The second uses sudo to escalate privileges to run the compiler. This means you're running as a different user, with most environment variables cleared or modified during the switch (unless explicitly whitelisted in /etc/sudoers) - which has widespread potential to break things, depending on details of your compiler's expectations about its runtime environment, beyond what we can reasonably be expected to diagnose here.
That first one, at least, is amenable to a solution. In shell:
xCompile() {
  (cd ~/Documents/server/xCompiler && exec ./a.out "$@")
}

xCompile < ~/Documents/server/xPrograms/source.x
Using exec is a performance optimization: it offsets the cost of creating a new subshell (with the parentheses) by consuming that subshell to launch the compiler, rather than launching the compiler as a child process of the subshell.
When calling node's child_process.exec(), you can simply pass the desired runtime directory in the cwd option, so no shell function is necessary.

How can I make bash execute an ELF binary from stdin?

For some obscure reason I have written a bash script which generates some source code, then compiles it, using
... whatever ... | gcc -x c -o /dev/stdout -
Now, I want to execute the result of the compilation. How can I make that happen? No use of files, please.
As Charles Duffy said, to execute a binary, you'd have to tell your operating system (which seems to be a Unix variant) to load and execute something – and Unix systems only take files to execute them directly.
What you could do is have a process that prepares a memory region containing the ELF binary, fork, and jump into that region - but even that is questionable, considering that there is CPU support to suppress exactly that kind of operation (W^X). Basically, what you need is a runtime linker, and shells do not (and also: should not) include something like that.
Let's drop the Bash requirement (which really just sounds like you're trying to find an obvious hole in an application that is older than I am):
Generally, requiring ELF (which is a file format) and avoiding files at the same time is a tad complicated. GCC generates machine code. If you just want to execute known machine code, put it into some buffer, build a function pointer to that buffer, and call it. Simple as that. However, you obviously won't get the nice relocation and dynamic linking that the process of executing an ELF binary or loading a shared object (dlopen) would provide.
If you want that, I'd look in the direction of things like LLVM - I know, for a fact, that there are people building "I compile C++ at runtime and execute it" setups with LLVM as the executing instance and clang as the compiler. In the end, your gcc | something pipeline is really just JIT - an old technology :)
If your goal is to not write to the filesystem at all, then neither bash nor any other UNIX program will be able to help you execute an ELF from a pipe - execve only takes a path to a regular file as its filename and will fail (setting errno to EACCES) if you pass it a special file (device or named pipe) or a directory.
However, if your goal is to keep the executable entirely in RAM and not touch the hard disk (perhaps because the disk is read-only) you can do something with the same effect on your machine by using tmpfs, which comes with many UNIX-like systems (and is used in Linux to implement semaphores) and allows you to create a full-permissions filesystem that resides entirely in RAM:
$ sudo mount -t tmpfs -o size=10M tmpfs /mnt/mytmpfs
You can then write your binary to that:
... whatever ... | gcc -x c -o /mnt/mytmpfs/program.out
/mnt/mytmpfs/program.out
and bash will load it for you as if it was on disk.
Note, however, that you do still need enough RAM onboard the device to store and execute the program - though due to the nature of most executable binaries, you would need that anyway.
If you don't want to leave the program behind on your ramdisk (or normal disk, if that is acceptable) for others to find, you can also delete the file immediately after starting to execute it:
/mnt/mytmpfs/program.out &
rm /mnt/mytmpfs/program.out
The name will disappear immediately, but the process internally holds a reference to the file and releases it when it terminates. (However, the storage won't actually be freed until the program exits, and the program will not be able to exec itself again either.)
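On Linux you can confirm that the deleted image is still held by the process (a sketch; $! is the PID of the most recent background job):

/mnt/mytmpfs/program.out &
rm /mnt/mytmpfs/program.out
ls -l /proc/$!/exe   # the symlink still resolves, with a ' (deleted)' suffix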
