Ada Environment Variable Path Issue - gcc

When compiling Ada: if I set my PATH to the GNAT install, the global commands (clear, sudo, gcc, etc.) stop working, but if I set it back to the global (default) PATH, the global commands work and the GNAT Ada build isn't recognized.
How do I fix this?
Note: the ➜ Ada prompt is equivalent to $ (using Oh My Zsh, from the Ada folder)
Terminal: (Note start & end are the same)
➜ Ada gcc -c main.adb
error: invalid value 'ada' in '-x ada'
➜ Ada PATH=/Users/Ryan/opt/GNAT/2020/bin
➜ Ada gcc -c main.adb
xcode-select: error trying to exec 'xcode-select': execvp: No such file or directory
gcc: error trying to exec 'as': execvp: No such file or directory
➜ Ada PATH=/bin:/usr/bin:/usr/local/bin:${PATH}
export PATH
➜ Ada gcc -c main.adb
error: invalid value 'ada' in '-x ada'
➜ Ada

Just do
PATH=/Users/Ryan/opt/GNAT/2020/bin:${PATH}
instead of
PATH=/Users/Ryan/opt/GNAT/2020/bin
You need to prepend the GNAT path to PATH, not replace the whole PATH.
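To make this survive new terminal sessions, the same prepend can live in your shell startup file. A minimal sketch, assuming Oh My Zsh reads ~/.zshrc (its usual default):
# ~/.zshrc: prepend GNAT's bin directory but keep the rest of PATH,
# so GNAT's gcc is found first while as, ld, sudo, xcode-select, etc. still resolve
export PATH="/Users/Ryan/opt/GNAT/2020/bin:${PATH}"
# quick check: this should now point into the GNAT tree
which gcc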

Related

Running GO GET SDL2 Error executable file not found in $PATH

I'm simply trying to install go-sdl2 on macOS from https://github.com/veandco/go-sdl2#installation
go get -v github.com/veandco/go-sdl2/{sdl,img,mix,ttf}
I get the following error:
github.com/veandco/go-sdl2/sdl
# pkg-config --cflags sdl2
pkg-config: exec: "pkg-config": executable file not found in $PATH
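That error means the pkg-config executable itself can't be found on PATH, so the cgo build for go-sdl2 can't query SDL2's compiler flags. A sketch of a typical fix, assuming Homebrew is installed and on PATH:
# install pkg-config plus the SDL2 libraries that go-sdl2 links against
brew install pkg-config sdl2 sdl2_image sdl2_mixer sdl2_ttf
# verify pkg-config is now reachable and knows about sdl2
which pkg-config
pkg-config --cflags sdl2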

OSX Sierra Tensorflow build error: ld: file not found: @rpath/CUDA.framework/Versions/A/CUDA

I have followed the instructions in:
https://gist.github.com/notilas/a30e29ce514970e821a34153c1e78b3f
But cannot complete it.
OSX: Sierra
Tensorflow version 1.1.0 (Google says v1.2 does not support OSX CUDA)
CUDA Toolkit: 8.0
CUDNN : 6.0
Xcode : 7.2.1
Anaconda : 4.2 (Python version 3.5)
Error Log:
ERROR: /Users/so041e/ml/tensorflow/tensorflow/python/BUILD:2534:1:
Linking of rule '//tensorflow/python:_pywrap_tensorflow_internal.so'
failed: link_dynamic_library.sh failed: error executing command
external/bazel_tools/tools/cpp/link_dynamic_library.sh no ignored
ignored ignored
external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc
-shared -o ... (remaining 455 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process
exited with status 1.
clang: warning: argument unused during compilation: '-pthread'
ld: file not found: @rpath/CUDA.framework/Versions/A/CUDA for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
[.bash_profile]
export PATH="/Users/so041e/anaconda/bin:$PATH"
export CUDA_HOME=/usr/local/cuda
export HOME=/Users/so041e
export PATH="$CUDA_HOME/bin:$PATH"
export DYLD_LIBRARY_PATH="/usr/local/cuda/lib:/Developer/NVIDIA/CUDA8.0/lib":$DYLD_LIBRARY_PATH
export LD_LIBRARY_PATH=$DYLD_LIBRARY_PATH
export PATH=$DYLD_LIBRARY_PATH:$PATH
export PATH="//anaconda/bin:$PATH"
Moved the cuDNN lib and include files to /usr/local/cuda:
sudo mv -v cuda/lib/libcudnn* /usr/local/cuda/lib
sudo mv -v cuda/include/cudnn.h /usr/local/cuda/include
Didn't use a venv; just used a single Python 3.5 at the moment.
Tried both, but no difference.
bazel build --config=cuda --config=opt --action_env PATH --action_env LD_LIBRARY_PATH --action_env DYLD_LIBRARY_PATH //tensorflow/tools/pip_package:build_pip_package
bazel build --config=cuda //tensorflow/tools/pip_package:build_pip_package
This might be a bit late, but I had this exact same problem and I managed to fix it.
First, @rpath/CUDA.framework/Versions/A/CUDA is the dynamic library install name for libcuda.dylib, which is found in /usr/local/cuda/lib. So run
otool -l /usr/local/cuda/lib/libcuda.dylib
Check where you see @rpath/CUDA.framework/Versions/A/CUDA; on my system it was in the LC_REEXPORT_DYLIB command. From what I can tell, dyld doesn't resolve @rpath for the LC_REEXPORT_DYLIB command, only for LC_LOAD*_DYLIB commands. That means it looks for the literal path "@rpath/CUDA.framework/Versions/A/CUDA". So you're going to have to change it by running
sudo install_name_tool -change @rpath/CUDA.framework/Versions/A/CUDA \
/Library/Frameworks/CUDA.framework/Versions/A/CUDA \
/usr/local/cuda/lib/libcuda.dylib
This should resolve your problem.
As for why your system (and mine) has this install name for libcuda.dylib, I have absolutely no clue.
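To confirm the change took effect, you can re-inspect the dylib's load commands; a sketch (the grep is just to narrow the output):
# before the fix this shows @rpath/CUDA.framework/Versions/A/CUDA;
# after install_name_tool it should show the absolute framework path
otool -l /usr/local/cuda/lib/libcuda.dylib | grep -A 2 LC_REEXPORT_DYLIB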

error: linking with `cc` failed: exit code: 1

I have a single .rs file. When I compile it with rustc test1.rs, I get an error:
error: linking with `cc` failed: exit code: 1
note: cc '-m64' '-L' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib' '-o' 'test1' 'test1.o' '-Wl,-force_load,/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib/libmorestack.a' '-Wl,-dead_strip' '-nodefaultlibs' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib/libstd-4e7c5e5c.rlib' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib/libcollections-4e7c5e5c.rlib' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib/libunicode-4e7c5e5c.rlib' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib/librand-4e7c5e5c.rlib' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib/liballoc-4e7c5e5c.rlib' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib/liblibc-4e7c5e5c.rlib' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib/libcore-4e7c5e5c.rlib' '-L' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib' '-L' '/Users/alex/Documents/projects/rust/.rust/lib/x86_64-apple-darwin' '-L' '/Users/alex/Documents/projects/rust/lib/x86_64-apple-darwin' '-lSystem' '-lpthread' '-lc' '-lm' '-lcompiler-rt'
note: ld: warning: directory not found for option '-L/Users/alex/Documents/projects/rust/.rust/lib/x86_64-apple-darwin'
ld: warning: directory not found for option '-L/Users/alex/Documents/projects/rust/lib/x86_64-apple-darwin'
ld: can't open output file for writing: test1, errno=21 for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: aborting due to previous error
$ rustc --version
rustc 1.0.0-dev
I've seen some topics related to this one, but none of them helped me solve the problem.
I ran into three problems compiling Rust on a Mac:
First: if ld has any issue writing files or directories, just remove those files and recompile. I don't know why, but on a Mac this happens from time to time.
Second: if you have other ld errors (not about file access), try adding the following sections to your ~/.cargo/config (if you don't have this file, feel free to create it):
[target.x86_64-apple-darwin]
rustflags = [
"-C", "link-arg=-undefined",
"-C", "link-arg=dynamic_lookup",
]
[target.aarch64-apple-darwin]
rustflags = [
"-C", "link-arg=-undefined",
"-C", "link-arg=dynamic_lookup",
]
Third: sometimes your Mac lacks some dev tools/dependencies. Install the most important of them with:
xcode-select --install
From your command rustc test1.rs, the compiler infers that the executable should be named test1. The linker tries to open this file so it can write the executable, but fails with errno=21, whose stringified version is "Is a directory".
This suggests you have a directory called test1 in your working directory, which is causing the conflict.
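A quick way to confirm the clash and work around it; a sketch, where test1-bin is just an arbitrary alternative output name:
# confirm there is a directory (not a file) named test1 in the way
ls -ld test1
# either remove/rename that directory, or point rustc at a different output name
rustc test1.rs -o test1-bin
./test1-bin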
If you get "note: /usr/bin/ld: cannot find -lsqlite3",
then install libsqlite3-dev: $ sudo apt install libsqlite3-dev
This worked on Rust 1.53.0 and Linux Mint 20.2 (based on Ubuntu 20.04 LTS).
If you have a MacBook M1(x) with an ARM processor, you need to install Rust via rustup: https://sourabhbajaj.com/mac-setup/Rust/
When you run rustup-init, use the customize option to change aarch64-apple-darwin to x86_64-apple-darwin.
Then you can add the following to .cargo/config.toml or .cargo/config (either is fine)
[target.x86_64-apple-darwin]
rustflags = [
"-C", "link-arg=-undefined",
"-C", "link-arg=dynamic_lookup",
]
This solution was tested with Rust 1.54 and a MacBook M1.
I was able to do a cargo build --release and generate a dylib file following this tutorial: https://www.youtube.com/watch?v=yqLD22sIYMo
My Rust project stopped building after I updated macOS, and this command fixed it for me:
xcode-select --install
I had the same issue recently, and this is the solution that worked for me:
https://www.docker.com/blog/cross-compiling-rust-code-for-multiple-architectures/
When running Rust on aarch64, I found that libc6-dev-arm64-cross is needed in order to compile Rust successfully on aarch64; see the sketch below.
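For reference, a minimal cross-compilation sketch on a Debian/Ubuntu host targeting aarch64-unknown-linux-gnu; the package and target names are the standard ones, but check your distribution:
# cross toolchain: aarch64 gcc (used as the linker driver) plus the arm64 cross libc
sudo apt-get install gcc-aarch64-linux-gnu libc6-dev-arm64-cross
# add the Rust standard library for the target
rustup target add aarch64-unknown-linux-gnu
# tell cargo which linker to use for that target
cat >> .cargo/config.toml <<'EOF'
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
EOF
# build for the target
cargo build --release --target aarch64-unknown-linux-gnu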

error compiling uClibc (__NR_or1k_atomic undeclared)

I am following http://openrisc.net/toolchain-build.html to build the gcc toolchain for OpenRISC or32.
I'm doing the 'building by hand' flow and have finished
binutils
stage 1 gcc
install linux headers
and was about to do 'compile uClibc', which consists of the commands below.
$ git clone git://openrisc.net/jonas/uClibc
$ cd uClibc
$ make ARCH=or32 defconfig
$ make PREFIX=${SYSROOT}
$ make PREFIX=${SYSROOT} install
When I run 'make ARCH=or32 defconfig', I get this error.
CC libpthread/linuxthreads.old/attr.o
In file included from libpthread/linuxthreads.old/internals.h:30:0,
from libpthread/linuxthreads.old/attr.c:26:
./libpthread/linuxthreads.old/sysdeps/or32/pt-machine.h: In function 'testandset':
./libpthread/linuxthreads.old/sysdeps/or32/pt-machine.h:41:8: error: '__NR_or1k_atomic' undeclared (first use in this function)
./libpthread/linuxthreads.old/sysdeps/or32/pt-machine.h:41:8: note: each undeclared identifier is reported only once for each function it appears in
In file included from libpthread/linuxthreads.old/../linuxthreads.old_db/proc_service.h:20:0,
from libpthread/linuxthreads.old/../linuxthreads.old_db/thread_dbP.h:9,
from libpthread/linuxthreads.old/internals.h:32,
from libpthread/linuxthreads.old/attr.c:26:
./include/sys/procfs.h: At top level:
./include/sys/procfs.h:32:21: fatal error: asm/elf.h: No such file or directory
compilation terminated.
make: *** [libpthread/linuxthreads.old/attr.o] Error 1
Has anybody had the same problem? I am using CentOS 6.4.
gcc searches for system header files in the following order:
/usr/local/include
libdir/gcc/target/version/include (libdir was /usr/lib in my case)
/usr/target/include
/usr/include
My system had sys/syscall.h under /usr/include, so that file was used when the sys/syscall.h under uClibc/include should have been used instead. So I added -nostdinc so that gcc doesn't search the standard include paths. Now the command became
make PREFIX=${SYSROOT} -nostdinc
and it works!
The following command was also modified
make PREFIX=${SYSROOT} -nostdinc install
Cheers!
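If you want to see exactly which directories your gcc searches, you can ask the driver to print them; a sketch:
# prints the "#include <...> search starts here:" list on stderr
gcc -xc -E -v /dev/null > /dev/null
# with -nostdinc the standard directories disappear from that list
gcc -nostdinc -xc -E -v /dev/null > /dev/null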

Install nullfs on Debian

I am using a Java program. It automatically creates log files in a directory, but I am already handling that differently with tee. I cannot find an easy way to disable the logs, so I am resorting to nullfs.
I cloned it with
git clone https://github.com/xrgtn/nullfs.git
and I ran
make nul1fs
as instructed. It terminates within a second, with the following output:
cc "-lfuse" nul1fs.c -o nul1fs
nul1fs.c:13:18: fatal error: fuse.h: No such file or directory
compilation terminated.
make: *** [nul1fs] Error 1
I tried apt-get source fuse and copying fuse.h into the nullfs directory, but nothing changed.
I have FUSE installed. I'm running Debian wheezy x86_64.
You need the development package of FUSE, which contains the fuse.h you're missing. Run apt-get install libfuse-dev and it should work.
Copying the header file into the source directory did not work because, as you'll notice in nul1fs.c, fuse.h is included with angle brackets. That means the header is searched for in the system-wide include paths, which usually means /usr/include.
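To check where the header actually lands once libfuse-dev is installed, and which flags FUSE expects, a sketch:
# list where the dev package puts fuse.h (should be under /usr/include)
dpkg -L libfuse-dev | grep 'fuse\.h$'
# the canonical compile/link flags for FUSE, via pkg-config
pkg-config --cflags --libs fuse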
Note that you then may run into this error:
$ make nul1fs
cc "-lfuse" nul1fs.c -o nul1fs
/tmp/ccbt0X7c.o: In function `main':
nul1fs.c:(.text+0x3c3): undefined reference to `fuse_main_real'
collect2: error: ld returned 1 exit status
make: *** [nul1fs] Error 1
It's a documented bug with a workaround: put the linker flags after the file list. That is, compile nul1fs with:
cc nul1fs.c -o nul1fs -lfuse
and not with make nul1fs, which boils down to
cc -lfuse nul1fs.c -o nul1fs
