How do I force dub to install a newer version of a package?

I keep trying to use dub run with a newer version, but it doesn't work; it just rebuilds the old version.
$ dub run dpp@0.3.4
Fetching dpp 0.3.4...
Building package dpp in /Users/james/.dub/packages/dpp-0.3.1/dpp/
Performing "debug" build using /Library/D/dmd/bin/dmd for x86_64.
libclang 0.1.8: building configuration "library"...
sumtype 0.7.1: building configuration "library"...
dpp 0.3.1: building configuration "executable"...
Linking...
ld: warning: directory not found for option '-L/usr/lib/llvm-6.0/lib'
ld: warning: directory not found for option '-L/usr/lib/llvm-3.9/lib'
Running ../../../../.dub/packages/dpp-0.3.1/dpp/bin/d++
Error: No .dpp input file specified
Usage: d++ [options] [D compiler options] <filename.dpp> [D compiler args]
Program exited with code 1
dub cache-clean doesn't fix the problem either.

Delete the stale cached package with rm -rf:
rm -rf ../../../../.dub/packages/dpp-0.3.1/
Then re-run with --force:
$ dub run dpp@0.3.4 --force
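If you prefer not to delete cache directories by hand, dub's own cache subcommands can do the same job. A minimal sketch, assuming the @version selection syntax shown above; depending on your dub release, dub remove may prompt for which cached version to drop:
# Show which dpp versions are in the local cache
dub list
# Remove the stale cached copy (dub may ask which version if several are present)
dub remove dpp
# Fetch and force a rebuild of the wanted version
dub run dpp@0.3.4 --force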

Related

OSX Sierra Tensorflow build error: ld: file not found: @rpath/CUDA.framework/Versions/A/CUDA

I have followed the instructions in
https://gist.github.com/notilas/a30e29ce514970e821a34153c1e78b3f
but cannot complete the build.
OSX: Sierra
Tensorflow version 1.1.0 (Google says v1.2 does not support OSX CUDA)
CUDA Toolkit : 8.0
CUDNN : 6.0
Xcode : 7.2.1
Anaconda : 4.2 (Python version 3.5)
Error Log:
ERROR: /Users/so041e/ml/tensorflow/tensorflow/python/BUILD:2534:1:
Linking of rule '//tensorflow/python:_pywrap_tensorflow_internal.so'
failed: link_dynamic_library.sh failed: error executing command
external/bazel_tools/tools/cpp/link_dynamic_library.sh no ignored
ignored ignored
external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc
-shared -o ... (remaining 455 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process
exited with status 1.
clang: warning: argument unused during compilation: '-pthread'
ld: file not found: @rpath/CUDA.framework/Versions/A/CUDA for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
[.bash_profile]
export PATH="/Users/so041e/anaconda/bin:$PATH"
export CUDA_HOME=/usr/local/cuda
export HOME=/Users/so041e
export PATH="$CUDA_HOME/bin:$PATH"
export DYLD_LIBRARY_PATH="/usr/local/cuda/lib:/Developer/NVIDIA/CUDA8.0/lib":$DYLD_LIBRARY_PATH
export LD_LIBRARY_PATH=$DYLD_LIBRARY_PATH
export PATH=$DYLD_LIBRARY_PATH:$PATH
export PATH="//anaconda/bin:$PATH"
Moved CUDNN lib and include to /usr/local/cuda
sudo mv -v cuda/lib/libcudnn* /usr/local/cuda/lib
sudo mv -v cuda/include/cudnn.h /usr/local/cuda/include
Didn't use a "venv"; just used a single Python 3.5 environment for now.
Tried both of the following, but no difference:
bazel build --config=cuda --config=opt --action_env PATH --action_env LD_LIBRARY_PATH --action_env DYLD_LIBRARY_PATH //tensorflow/tools/pip_package:build_pip_package
bazel build --config=cuda //tensorflow/tools/pip_package:build_pip_package
This might be a bit late, but I had this exact same problem and I managed to fix it.
First, @rpath/CUDA.framework/Versions/A/CUDA is a dynamic library install name for libcuda.dylib, which is found in /usr/local/cuda/lib. So do
otool -l /usr/local/cuda/lib/libcuda.dylib
Check where you see @rpath/CUDA.framework/Versions/A/CUDA; on my system it was in the LC_REEXPORT_DYLIB command. From there, it seems dyld doesn't resolve @rpath for the LC_REEXPORT_DYLIB command, only for LC_LOAD*_DYLIB commands. In other words, it looks for the literal path "@rpath/CUDA.framework/Versions/A/CUDA". So you're going to have to change that by doing
sudo install_name_tool -change @rpath/CUDA.framework/Versions/A/CUDA \
/Library/Frameworks/CUDA.framework/Versions/A/CUDA \
/usr/local/cuda/lib/libcuda.dylib
This should resolve your problem.
As for why your system (and mine) has this install name for libcuda.dylib, I have absolutely no clue.
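A quick way to confirm the rewrite took effect, assuming the same library path as above:
# The LC_REEXPORT_DYLIB load command should now show the absolute framework
# path instead of @rpath/CUDA.framework/Versions/A/CUDA
otool -l /usr/local/cuda/lib/libcuda.dylib | grep -A2 CUDA.framework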

CLion and CMake fail after updating to Xcode 8

I get the following error after updating to Xcode 8 and I'm not sure how to fix it.
Error:The C compiler "/usr/bin/cc" is not able to compile a simple test program.
It fails with the following output:
Change Dir: /Users/username/Library/Caches/CLion2016.2/cmake/generated/CacheBack-27c25a9c/27c25a9c/__default__/CMakeFiles/CMakeTmp
Run Build Command:"/usr/bin/make" "cmTC_e91e5/fast"
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f CMakeFiles/cmTC_e91e5.dir/build.make CMakeFiles/cmTC_e91e5.dir/build
Building C object CMakeFiles/cmTC_e91e5.dir/testCCompiler.c.o
/usr/bin/cc -o CMakeFiles/cmTC_e91e5.dir/testCCompiler.c.o -c /Users/username/Library/Caches/CLion2016.2/cmake/generated/CacheBack-27c25a9c/27c25a9c/__default__/CMakeFiles/CMakeTmp/testCCompiler.c
cc: error: unable to find utility "clang", not a developer tool or in PATH
make[1]: *** [CMakeFiles/cmTC_e91e5.dir/testCCompiler.c.o] Error 72
make: *** [cmTC_e91e5/fast] Error 2
CMake will not be able to correctly generate this project.
Please check xcode-select -p, make sure it points to the Xcode 8 installation, and run xcode-select --install after that.
I had to accept the license dialog after starting Xcode to get the updated components. After that I restarted CLion and it worked.
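For reference, the whole sequence looks roughly like this, assuming a default /Applications/Xcode.app install location:
xcode-select -p                                                  # show the active developer directory
sudo xcode-select -s /Applications/Xcode.app/Contents/Developer  # point it at Xcode 8 if it is elsewhere
sudo xcodebuild -license accept                                  # accept the license without the dialog
xcode-select --install                                           # install/repair the command-line tools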

error: linking with `cc` failed: exit code: 1

I have a single .rs file. When I compile it with rustc test1.rs, I get an error:
error: linking with `cc` failed: exit code: 1
note: cc '-m64' '-L' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib' '-o' 'test1' 'test1.o' '-Wl,-force_load,/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib/libmorestack.a' '-Wl,-dead_strip' '-nodefaultlibs' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib/libstd-4e7c5e5c.rlib' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib/libcollections-4e7c5e5c.rlib' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib/libunicode-4e7c5e5c.rlib' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib/librand-4e7c5e5c.rlib' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib/liballoc-4e7c5e5c.rlib' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib/liblibc-4e7c5e5c.rlib' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib/libcore-4e7c5e5c.rlib' '-L' '/usr/local/Cellar/rust/1.0.0-alpha/lib/rustlib/x86_64-apple-darwin/lib' '-L' '/Users/alex/Documents/projects/rust/.rust/lib/x86_64-apple-darwin' '-L' '/Users/alex/Documents/projects/rust/lib/x86_64-apple-darwin' '-lSystem' '-lpthread' '-lc' '-lm' '-lcompiler-rt'
note: ld: warning: directory not found for option '-L/Users/alex/Documents/projects/rust/.rust/lib/x86_64-apple-darwin'
ld: warning: directory not found for option '-L/Users/alex/Documents/projects/rust/lib/x86_64-apple-darwin'
ld: can't open output file for writing: test1, errno=21 for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: aborting due to previous error
$ rustc --version
rustc 1.0.0-dev
I've seen some topics related to this one, but none of them helped me solve the problem.
I was faced with three problems on Mac compiling Rust:
First: if ld has trouble writing files/dirs, just remove those files and try to recompile. I don't know why, but on Mac this happens from time to time.
Second: if you have other ld errors (not about file access), try adding the following sections to your ~/.cargo/config (if you don't have this file, feel free to create it):
[target.x86_64-apple-darwin]
rustflags = [
"-C", "link-arg=-undefined",
"-C", "link-arg=dynamic_lookup",
]
[target.aarch64-apple-darwin]
rustflags = [
"-C", "link-arg=-undefined",
"-C", "link-arg=dynamic_lookup",
]
Third: sometimes your Mac lacks some dev tools/dependencies. Install the most important ones automatically with the command:
xcode-select --install
From your command rustc test1.rs the compiler infers the name of the executable should be test1. The linker tries to open this file so it can write the executable but fails with errno=21 whose stringified version is "Is a directory".
This suggests you have a directory in your working directory called test1 which is causing a conflict.
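For example, you can confirm the clash and work around it without touching the directory; the test1_bin name here is just an illustration:
ls -ld test1                  # shows a directory, which is what the linker trips over
rustc test1.rs -o test1_bin   # give the executable a different output name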
If you get "note: /usr/bin/ld: cannot find -lsqlite3",
then install libsqlite3-dev: $ sudo apt install libsqlite3-dev
This works on Rust 1.53.0, Linux Mint 20.2 (based on Ubuntu 20.04 LTS).
If you have a MacBook M1(x) with an ARM processor, you need to install Rust via rustup: https://sourabhbajaj.com/mac-setup/Rust/
When you run rustup-init, use the customize option to change aarch64-apple-darwin to x86_64-apple-darwin.
Then you can add the following to .cargo/config.toml or .cargo/config (either is fine):
[target.x86_64-apple-darwin]
rustflags = [
"-C", "link-arg=-undefined",
"-C", "link-arg=dynamic_lookup",
]
This solution was tested with Rust 1.54 and a MacBook M1.
I was able to do a cargo build --release and generate a dylib file following this tutorial: https://www.youtube.com/watch?v=yqLD22sIYMo
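A possible alternative, not taken from the answers above, is to keep the default aarch64 toolchain and add x86_64-apple-darwin as an extra target, building for it explicitly only when needed:
rustup target add x86_64-apple-darwin
cargo build --release --target x86_64-apple-darwin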
My Rust project stopped building after updating macOS; this command fixed it for me:
xcode-select --install
I had the same issue recently and found a solution that worked for me:
https://www.docker.com/blog/cross-compiling-rust-code-for-multiple-architectures/
When running Rust on aarch64, I found that libc6-dev-arm64-cross is needed in order to compile Rust successfully on aarch64.

Error when compiling Glog

I ran into an issue when compiling glog: running 'make' after './configure' fails.
I get this error:
Undefined symbols for architecture x86_64:
"testing::internal::StrStreamToString(std::__1::basic_stringstream, std::__1::allocator >)", referenced from:
testing::internal::String testing::internal::StreamableToString(void const const&) in logging_unittest-logging_unittest.o
testing::internal::String testing::internal::StreamableToString(int const&) in logging_unittest-logging_unittest.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [logging_unittest] Error 1
I am using glog-0.3.3 on Mac OS X.
So how can I turn off testing while compiling glog?
In another context, I installed glog and gflags using MacPorts, then ran a small program. It generates an error:
"ERROR: unknown command line flag 'logtostderr'"
I believe that's a problem with linking to gflags. How can I fix it? Thanks.
GLog needs GFlags compiled in the "google" namespace instead of the now default "gflags" namespace.
In order to set this namespace you must compile and install gflags from source and set the GFLAGS_NAMESPACE variable to "google".
Here are the steps I followed on Kubuntu 14.04; they should be similar to what you'd do on Mac OS X. These place the GFlags source in /usr/local/src and install the library in the /usr/local/lib and /usr/local/include directories. The last command (ldconfig) registers the library with the system.
cd /usr/local/src/
cp /path/to/downloaded/gflags-2.1.1.tar.gz .
sudo tar xzf gflags-2.1.1.tar.gz
cd /tmp
mkdir buildgflags
cd buildgflags
cmake -DCMAKE_INSTALL_PREFIX=/usr/local -DBUILD_SHARED_LIBS=ON \
-DGFLAGS_NAMESPACE=google -G"Unix Makefiles" /usr/local/src/gflags-2.1.1/
make
sudo make install
sudo ldconfig
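To sanity-check the install, the generated headers should report the namespace that was configured above; this assumes the gflags 2.1.x header layout and the /usr/local prefix used in the commands:
grep GFLAGS_NAMESPACE /usr/local/include/gflags/gflags_declare.h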
Alternatively you can apply the following patch in the GLog source (attached in the last reply):
https://code.google.com/p/google-glog/issues/detail?id=194
It basically imports the gflags namespace after the includes in GLog's unit test source files, like so:
#ifdef HAVE_LIB_GFLAGS
#include <gflags/gflags.h>
using namespace gflags;
#endif

dtrace: failed to compile script Preprocessor not found

I'm trying to test this script from Oracle to get active NFS clients on Ubuntu 10.04, but I can't get it to run.
To achieve that, I first installed dtrace following these instructions. This is what I've done exactly:
apt-get install bison flex zlib1g-dev libelf-dev binutils-dev libdw-dev libc6-dev-i386
wget ftp://crisp.dyndns-server.com/pub/release/website/dtrace/dtrace-20121009.tar.bz2
tar xfj dtrace-20121009.tar.bz2
cd dtrace-20121009
make all
make install
make load
However, I get this warning when compiling:
=================================================================
=== You need /usr/lib/libdwarf.a and /usr/lib/libbfd.a installed to build.
===
=== apt-get install binutils-dev
=== apt-get install libdw-dev
===
=== Without these, we will not build ctfconvert (needed for
=== SDT structure definitions).
=================================================================
cd cmd/instr ; make --no-print-directory
cd usdt/c ; make --no-print-directory
tools/mkdriver.pl all
Executing: /usr/src/dtrace/dtrace-20121009/tools/make-me
make -C /lib/modules/2.6.38-16-server/build M=/usr/src/dtrace/dtrace-20121009/build-2.6.38-16-server/driver
CC [M] /usr/src/dtrace/dtrace-20121009/build-2.6.38-16-server/driver/systrace.o
LD [M] /usr/src/dtrace/dtrace-20121009/build-2.6.38-16-server/driver/dtracedrv.o
Building modules, stage 2.
MODPOST 1 modules
LD [M] /usr/src/dtrace/dtrace-20121009/build-2.6.38-16-server/driver/dtracedrv.ko
tools/mkctf.sh
build/ctfconvert not available - so not building the linux.ctf file
NOTE: The build is complete, but build/ctfconvert is not available.
This means you will get run time errors from the io.d and sched.d files
due to undefined kernel structure definitions. Simply delete or rename
these files until a fix can be put in place to handle older
distros which do not have the required libdwarf dependencies.
(Typical error is references to undefined struct definitions such
as dtrace_cpu_t).
sync
I've installed libdw-dev and binutils-dev, but taking a look at the makefile, it seems it looks for libdwarf.so, and libdw on my system is named libdw.so.
To circumvent this, I created a symlink with ln -s /usr/lib/libdw.so /usr/lib/libdwarf.so. After doing so, compiling fails.
cd cmd/ctfconvert ; make --no-print-directory
gcc -g -I. -I../../ -I../../libctf -I../../common -I../../uts/common -I../../linux -I/usr/include/libdwarf -c dwarf.c
In file included from dwarf.c:94:
/usr/include/dwarf.h:56: error: expected identifier before numeric constant
/usr/include/dwarf.h:136: error: expected identifier before numeric constant
/usr/include/dwarf.h:321: error: expected identifier before numeric constant
/usr/include/dwarf.h:461: error: expected identifier before numeric constant
/usr/include/dwarf.h:517: error: expected identifier before numeric constant
make[3]: *** [../../build/ctfconvert.obj/dwarf.o] Error 1
make[2]: *** [all] Error 2
make[1]: *** [do_cmds] Error 2
tools/bug.sh
make: *** [all] Error 1
So, let's undo that. I remove the symlink, compile again, run make install and make load and hope everything is fine. And everything seems to be fine.
But, then I try to run the script mentioned above, and it fails:
# ./get_ngs_clients.d
dtrace: failed to compile script ./get_ngs_clients.d: Preprocessor not found
I have no clue what's going on. I have gcc installed, just in case.
# dpkg -l | grep gcc
ii gcc 4:4.4.3-1ubuntu1 The GNU C compiler
ii gcc-4.4 4.4.3-4ubuntu5.1 The GNU C compiler
ii gcc-4.4-base 4.4.3-4ubuntu5.1 The GNU Compiler Collection (base package)
ii gcc-4.4-multilib 4.4.3-4ubuntu5.1 The GNU C compiler (multilib files)
ii gcc-multilib 4:4.4.3-1ubuntu1 The GNU C compiler (multilib files)
ii lib32gcc1 1:4.4.3-4ubuntu5.1 GCC support library (32 bit Version)
ii libgcc1 1:4.4.3-4ubuntu5.1 GCC support library
If you do not have libdwarf.a on your system, the ctfconvert tool will not build. (libdwarf.a and libdw.a are not the same).
If ctfconvert does not build, then your own scripts, or the dtrace etc/*.d scripts, may not load. (DTrace force-loads these scripts for you automatically, which is annoying.) Any script which relies on structure definitions will then fail.
As of May 2013, I am looking into what it takes to update to libdw.a, since this seems to be the modern replacement for libdwarf.
(posted by the 'author' of DTrace/Linux).
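If your distribution ships a real libdwarf development package (package names vary by release, so treat this as an assumption), installing it before rebuilding lets ctfconvert be built, which is what the io.d/sched.d structure definitions need:
# Install the real libdwarf headers and static library, then rebuild
apt-get install libdwarf-dev
make clean && make all && make install && make load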
Have you tried adding --enable-dtrace=false to ./configure?
Or maybe --with-dtrace=false?
That should do the trick, I think...
