I'm trying to use pp (the perl compiler) to create an application that can run independent of the perl installed library and interpreter.
It successfully creates a compiled executable, although I had to use the -x and -c options to get it to find the dependencies. The executable runs on my machine, but when I try it on another machine I get this error, so clearly there is still an unresolved dependency:
501 Protocol scheme 'https' is not supported (LWP::Protocol::https not installed)
I am running it on macOS 10.14.1, if that makes any difference. Thanks!
LWP::Protocol::https is loaded dynamically when needed, so pp has no way of knowing it's needed by default.
Solution 1
Pass -x to pp, and make sure the module is actually loaded during the run pp uses to determine the modules to include. This would probably be achieved by using LWP to make an HTTPS request during that run; --xargs=... might come in useful for this.
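For example (myscript.pl and its --self-test flag are hypothetical; the idea is that the flag makes your script perform an HTTPS request during pp's trial run, so the handler gets loaded and detected):
pp -x --xargs="--self-test" -o myapp myscript.pl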
Solution 2
Pass -M LWP::Protocol::https to pp. You could also pass -M 'LWP::Protocol::**' to get all the protocol handlers you have installed.
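For example (script and output names are illustrative):
pp -M LWP::Protocol::https -o myapp myscript.pl
or, to bundle every installed handler:
pp -M 'LWP::Protocol::**' -o myapp myscript.pl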
Solution 3
Add use LWP::Protocol::https (); to your script or an included module. Including a comment indicating why you are doing this would be appropriate.
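For example:
use LWP::Protocol::https ();  # not referenced directly; LWP loads it dynamically,
                              # so list it here to make pp pack it into the executable
The empty import list () avoids importing anything you don't actually use.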
You were building Net::SSLeay on macOS 10.14, linking it to libssl.44.dylib, which is not present on the macOS 10.12 machine where you try to run it.
I've found it annoying having to switch between build and test systems to find out which libraries are missing or incompatible and need to be packed.
I am now using the following strategy:
I use perlbrew instead of system perl.
For alien dependencies I use homebrew instead of the system libraries.
I build the packed executable using pp, then run the resulting program on the development machine with DYLD_PRINT_LIBRARIES=YES exported.
I examine the list of loaded libraries and add all those referenced in the homebrew directory tree (/usr/local/opt/ and /usr/local/Cellar/ in my case) using pp -l /full/path/name -l ..., as sketched after this list.
I rebuild the executable.
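A sketch of steps 3 and 4 (program and library names are illustrative):
# run the packed program once with dyld logging enabled
export DYLD_PRINT_LIBRARIES=YES
./myapp 2>/tmp/libs.txt
# list the homebrew libraries it loaded
grep -E '/usr/local/(opt|Cellar)/' /tmp/libs.txt
# repack, passing each such library to pp explicitly
pp -x -o myapp -l /usr/local/opt/openssl/lib/libssl.1.0.0.dylib myscript.pl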
I still check on a target machine before deploying, but chances are very high now that it just works.
I have a C++ program where I need the current path to later create a folder. The location of my executable is, let's say /home/me/foo/bin. This is what I run:
//Here I expect '/home/me/foo/bin/', but get ''
auto currentPath = boost::filesystem::current_path();
//Here I expect '/home/me/foo/', but get ''
auto parentPath = currentPath.parent_path();
//Here I expect '/home/me/foo/foo2/', but get 'foo2/'
string subFolder = "foo2";
string folderPath = parentPath.string() + "/" + subFolder + "/";
//Here I expect to create '/home/me/foo/foo2/', but get a core dump
boost::filesystem::path boostPath{ folderPath};
boost::filesystem::create_directories( boostPath);
I am running on Ubuntu 16.04, using Boost 1.66 installed with the package manager Conan.
I used to run this successfully with a previous version of Boost (1.45 I believe) without using Conan. Boost was just normally installed on my machine. I now get a core dump when running create_directories( boostPath);.
Two questions:
Why isn't current_path() providing me with the actual path, returning an empty path instead?
Even if current_path() returned nothing, why would I still get a core dump, even when running with sudo? Wouldn't I simply create the folder(s) at the root?
Edit:
Running the compiled program, with some cout output of the above variables in between the lines (rather than using debug mode), normally gives me the following output:
currentPath: ""
parentPath: ""
folderPath: /foo2/
Segmentation fault (core dumped)
But sometimes (about 20% of the time) it gives me the following output:
currentPath: "/"
parentPath: "/home/me/fooA�[oFw�[oFw#"
folderPath: /home/me/fooA�[oFw�[oFw#/foo2/
terminate called after throwing an instance of 'boost::filesystem::filesystem_error'
what(): boost::filesystem::create_directories: Invalid argument
Aborted (core dumped)
Edit 2:
Running conan profile show default I get:
[settings]
os=Linux
os_build=Linux
arch=x86_64
arch_build=x86_64
compiler=gcc
compiler.version=5
compiler.libcxx=libstdc++
build_type=Release
[options]
[build_requires]
[env]
There is a discrepancy between the libcxx used by the dependencies and the one you are using to build your application.
With g++ (Linux) there are two standard library modes you can use: libstdc++, built without the C++11 ABI, and libstdc++11, built with the C++11 ABI. When you are building an executable (application or shared library), all the individual libraries linked together must be linked against the same libcxx.
libstdc++11 became the default for g++ >= 5, but this also depends on the Linux distro. Even if you install g++ >= 5 on an older distro like Ubuntu 14, the default libcxx will still be libstdc++; apparently it is not easy to upgrade it without breaking things. It also happens that very popular CI services used in open source, like travis-ci, ran older Linux distros, and thus libstdc++ linkage was the most common.
libstdc++ was the default for g++ < 5.
For historical and backwards-compatibility reasons, the conan default profile always uses libstdc++, even for modern compilers on modern distros. The default profile is created the first time conan is executed; you can also find it as the file .conan/profiles/default, or show it with conan profile show default. This will likely change in conan 2.0 (or even sooner), and the correct libcxx will be detected for each compiler if possible.
So, if you are not changing the default profile (using your own profiles is recommended for production), then when you execute conan install, the dependencies that are installed were built against libstdc++. Note that this conan install is independent of the build in most cases: it just downloads, unzips and configures the dependencies you want, with the requested configuration (from the default profile).
Then, when you are building, if you are not changing _GLIBCXX_USE_CXX11_ABI, you may be using your system compiler's default, in this case libstdc++11. In most cases a linking error appears that exposes this discrepancy, but in your case you were unlucky: your application managed to link, and then crashed at runtime.
There are a couple of approaches to solve this:
Build your application with libstdc++ too. Make sure to define _GLIBCXX_USE_CXX11_ABI=0.
Install your dependencies built for libstdc++11 instead. Edit your default profile to use libstdc++11, then issue a new conan install and rebuild your app. (A sketch of both options follows.)
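A minimal sketch of both options (conan profile update is Conan 1.x syntax; file names and flags are illustrative):
# Option 1: build your application against the old ABI too
g++ -D_GLIBCXX_USE_CXX11_ABI=0 -o app main.cpp
# Option 2: rebuild the dependencies against libstdc++11
conan profile update settings.compiler.libcxx=libstdc++11 default
conan install . --build=missing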
I would like to use pigz to compress massive tar archives.
I am using Cygwin. Unfortunately, pigz is not one of the standard Cygwin packages.
Anyone know how to install pigz under cygwin?
Below are the two techniques I tried without success:
1) The README on this webpage (or in the README file, if you download the source from here) says that you should be able to build it from source merely by
Type "make" in this directory to build the "pigz" executable.
When I do that on my machine, I get a ton of warnings starting with
pigz.c:2950:20: warning: unknown conversion type character 'j' in format [-Wformat=]
(intmax_t)g.in_tot, (intmax_t)len, tag);
and then this final error:
gcc -o pigz pigz.o yarn.o try.o deflate.o blocksplitter.o tree.o lz77.o cache.o hash.o util.o squeeze.o katajainen.o -lm -lpthread -lz
pigz.o:pigz.c:(.text+0xd4f8): undefined reference to `fsync'
collect2.exe: error: ld returned 1 exit status
make: *** [pigz] Error 1
That about exhausts my ability to build programs from source...
2) It looks like there is an old 2015 port of pigz version 2.3.3 to Cygwin Ports, the expanded Cygwin package repository.
But that version is out of date (the latest pigz is 2.4). Indeed, it looks like Cygwin Ports has migrated to GitHub, and searching there for pigz finds nothing.
I am not even sure how to use Cygwin Ports! The project's homepage merely says
Follow the normal Cygwin installation instructions in order to install
any of the packages currently maintained by this project.
I assume that means running Cygwin's setup-x86.exe, but when it asks you to "Choose A Download Site" you will need to enter some URL for Cygwin Ports.
Web searching found little information. This link says to use http://sourceware.org/cygwinports/ but setup-x86.exe soon generated an error for that URL. The instructions in this link also did not work for me.
The C99 standard specifies the j specifier for printf(). (Note that the 99 refers to 1999. It is now 2018.) You can force the pigz compilation to not assume C99 by changing __STDC_VERSION__-0 >= 199901L || __GNUC__-0 >= 3 to 0. Then it won't try to use j.
Please let me know what the values of __STDC_VERSION__, __GNUC__, and __GNUC_MINOR__ are for your compiler.
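One way to dump those values is to query the preprocessor directly (note that __STDC_VERSION__ may be absent if the compiler defaults to a pre-C99 mode):
gcc -dM -E - </dev/null | grep -E '__STDC_VERSION__|__GNUC__|__GNUC_MINOR__'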
pigz also requires POSIX compliance, which would provide the fsync() call. You can just delete the reference to fsync(), which would merely result in the --synchronous and -Y options having no effect.
To follow up on the comments above that I had with @varro and matzeri, I can now answer my own question: my suspicion was correct, RTools was the culprit. I found that if I temporarily removed all RTools elements from my Windows Path env var (for me: c:\Rtools\bin and c:\Rtools\mingw_32\bin), then I was able to get the pigz make to work.
After doing this Path edit, I uninstalled my existing Cygwin, reinstalled Cygwin, installed my usual extra packages (chere, openssh, subversion, zip, unzip) and all their dependencies, installed make and all its dependencies, and installed gcc-core (the C compiler) and all its dependencies. At that point, I was able to make pigz perfectly.
There is a much easier way than compiling yourself. I had the same problem, and with a little bit of research found multiple ready-made .exe files (pigz.exe) for direct usage in Windows. I am using this one:
https://sourceforge.net/projects/pigz-for-windows/files/
The OP's main concern was "I would like to use pigz to compress massive tar archives", and I hope this is a useful answer to that concern, even though it does not explain how to get around the compiling problems.
Some additional notes:
The interesting thing that some folks may not be aware of is that nothing keeps us from using normal Windows binaries from within Cygwin, and vice versa. That is, even if the OP has sophisticated Cygwin/bash (or whatever) scripts which drive pigz and the whole compression process, he could use the ready-made native Windows pigz version linked above.
With or without Cygwin, there is no need to compile pigz yourself, unless you want the latest features or bug fixes.
Personally, I have been using the native Windows pigz version from within Cygwin for a while. AFAIK, pigz has no progress bar, which is somewhat inconvenient for me (from time to time I have to compress a single huge file of around 60 GB). A convenient way to get around this is the pv utility. Since I haven't found a native Windows version of it, and since I am too lazy to compile it for Windows myself, I use Cygwin's pv to display progress when I let the native Windows pigz compress those huge files.
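A typical pipeline, assuming the native pigz.exe is on the PATH inside Cygwin (file name and thread count are illustrative):
# pv knows the input size, so it can show percentage progress while pigz compresses
pv hugefile.tar | pigz -p 8 > hugefile.tar.gz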
I've been trying to install LuaJIT on Windows 10 for some time, following the official guide, and I actually managed to install it. For example, if I execute luajit I get into the prompt. Also, luajit -v returns the version of LuaJIT (2.0.4), and I can execute code with luajit -e <lua code>. However, whenever I try to save bytecode with luajit -b, I get the following message:
luajit: unknown luaJIT command or jit.* modules not installed
I have tried all sorts of installations: using Cygwin, luajit-rocks, MinGW, ... However, no matter what I try, I always get the same result, and I have no clue what to do.
Could you point me to some potential problems I might be overlooking?
I have on my system Lua 5.1 and Luarocks.
Some extra LuaJIT features are implemented as separate Lua modules (e.g. jit.bcsave for bytecode saving), and LuaJIT depends on package.path to find those modules. The suggested install location for those modules is in the default package.path, but if you override it via the LUA_PATH environment variable, you have to make sure to include that location there. One easy way to do that is to put two consecutive semicolons into LUA_PATH: Double semicolons are replaced by the compile-time default value of package.path.
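For example, on Windows (cmd.exe), with an illustrative module directory:
set LUA_PATH=C:\my\lua\?.lua;;
The trailing double semicolon is replaced by the compile-time default of package.path, so jit.bcsave can still be found.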
You need to place the modules in a "jit" folder next to luajit.exe. That folder contains some internal modules (bcsave among them). Setting package.path may not work because, as I understand it, that location is hard-wired. The folder is distributed with the source code.
Download LuaJIT from the official site: https://luajit.org/download.html
You can see the "jit" folder inside the archive:
LuaJIT-2.0.5.zip\LuaJIT-2.0.5\src\jit\
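So, assuming luajit.exe lives in C:\LuaJIT (the path is illustrative), copying that folder next to it would look like:
xcopy /E /I LuaJIT-2.0.5\src\jit C:\LuaJIT\jit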
I'm trying to help build a Ruby wrapper around Tensorflow using Swig. Currently, I'm stuck at making a shared build, .so, and exposing its C/C++ headers to Ruby. So the question is: How do I build a libtensorflow.so shared build including the full Tensorflow library so it's available as a shared library on OSX El Capitan (note: /usr/lib/ is read-only on El Capitan)?
Background
In this ruby-tensorflow project, I need to package a Tensorflow .bundle file, but whenever I run irb -Ilib -rtensorflow or try to run the specs with rspec, I get errors that the basic numeric types are not defined, even though they are clearly defined here.
I'm guessing this happens because my .so file was not created properly or something is not linked as it should be. C++/Swig/Bazel are not my strong suits; I'd like to focus on learning Tensorflow and building a good wrapper in Ruby, but I'm pretty stuck at getting to that fun part!
What I've done:
git clone --recurse-submodules https://github.com/tensorflow/tensorflow
cd tensorflow
bazel build //tensorflow:libtensorflow.so (takes 10-15 min on my machine)
Copied the generated libtensorflow.so (166.6 MB) to the project's ext folder
Ran ruby extconf.rb, make, and make install as described in the project
Ran rspec
In desperation, I've also gone through the official installation from source several times, but I don't know whether the last step there, sudo pip install /tmp/tensorflow_pkg/tensorflow-0.9.0-py2-none-any.whl, even creates a shared build or just exposes a Python interface.
The guy, Arafat, who made the original repository and wrote the instructions I've followed, says his libtensorflow.so is 4.5 GB on his Linux machine, over 20x the size of the shared build on my OSX machine. UPDATE 1: he says his libtensorflow.so build is 302.2 MB; 4.5 GB was the size of the entire tensorflow folder.
Any help or alternative approaches are very appreciated!
After more digging around, discovering otool (thanks Kristina), and better understanding what a .so file is, the solution didn't require much change in my setup:
Shared Build
# Clone source files
git clone --recurse-submodules https://github.com/tensorflow/tensorflow
cd tensorflow
# Build library
bazel build //tensorflow:libtensorflow.so
# Copy the newly shared build/library to /usr/local/lib
sudo cp bazel-bin/tensorflow/libtensorflow.so /usr/local/lib
Calling from Ruby using Swig
Follow the steps here, https://github.com/chrhansen/ruby-tensorflow#install-ruby-tensorflow, to run Swig, create a Makefile and make
When you run make you should see a line saying:
$ make
linking shared-object libtensorflow.bundle
If your shared build is not accessible you'll see something like:
ld: library not found for -ltensorflow
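If that happens, one workaround (my assumption, not part of the original steps) is to point the linker at /usr/local/lib explicitly before re-running make:
export LIBRARY_PATH=/usr/local/lib   # gcc/clang search this at link time
make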
Simple tutorial
For those starting on this adventure, using C/C++ libraries in Ruby, this post was a good tutorial for me: http://engineering.gusto.com/simple-ruby-c-extensions-with-swig/
I don't think you actually want a .so; I think you want a .dylib (see What are the differences between .so and .dylib on osx?). You're forcing Bazel to build a .so by specifying libtensorflow.so as the target; build this instead:
bazel build //tensorflow
(//tensorflow is shorthand for //tensorflow:tensorflow, which is "build the tensorflow target." Specifying an exact file you want forces Bazel to build that file, if possible.)
Once you have a .dylib, you can check its contents with otool:
otool -L bazel-bin/tensorflow/libtensorflow.dylib
Not sure if this will solve all your problems, but worth a try.
The problem:
I can't seem to install perl modules correctly, JSON-2.53 in particular.
I have done the following:
Searched for a similar problem and tried its solution - did not work.
perl ".../config.h, needed by `Makefile'" not working after OSX Lion upgrade
Installed XCode command line developer utilities (c compiler, make, etc)
Read version compatibility documentation on this particular perl module: http://metacpan.org/pod/JSON
Ran the following commands to make and install the desired perl module:
$perl Makefile.PL
Welcome to JSON (v.2.53)
If you install JSON::XS v.2.27, it makes JSON faster.
************************** CAUTION **************************
* This is 'JSON version 2' and there are many differences   *
* to version 1.xx                                           *
* Please check your applications useing old version.        *
* See to 'INCOMPATIBLE CHANGES TO OLD VERSION' and 'TIPS'   *
*************************************************************
Writing Makefile for JSON
(verified that the Makefile has been written)
$make
make: *** No rule to make target `/System/Library/Perl/5.12/darwin-thread-multi-2level/CORE/config.h', needed by `Makefile'. Stop.
What does that error even mean? What can I do to successfully make install this module?
Here are some additional items that may help you assist me in debugging this issue:
$which make
/Applications/Xcode.app/Contents/Developer/usr/bin/make
$which perl
/usr/bin/perl
$perl -v
This is perl 5, version 12, subversion 3 (v5.12.3) built for darwin-thread-multi-2level
I think you need to download and reinstall Xcode. If I recall correctly, for 10.7, after downloading Xcode from the App Store it drops an installer into your Applications folder. You need to run it and try installing the command line tools again (from Xcode's preferences pane). I know you mentioned you did this already, but a bit more background might explain why it's worth another try.
Here are the relevant lines in the Makefile from my Mac:
PERL_INC = /System/Library/Perl/5.12/darwin-thread-multi-2level/CORE
# Where is the Config information that we are using/depend on
CONFIGDEP = $(PERL_ARCHLIB)$(DFSEP)Config.pm $(PERL_INC)$(DFSEP)config.h
Later on in the Makefile CONFIGDEP is used as a dependency in a target. I believe in your case make is looking for /System/Library/Perl/5.12/darwin-thread-multi-2level/CORE/config.h and can't find it. The error you're seeing is make's obtuse way of saying file not found.
config.h contains specific information about the OS but is not needed for running scripts. It's only referenced when you want to compile a module. With stock OSX you get enough perl to execute scripts. Install XCode and you get the bits (like config.h) to do perl "development". I use quotes because you can write and run perl scripts without Xcode. But as you discovered, compiling a module requires the additional files Xcode provides. (Incidentally, RedHat does the same thing. You have to install the perl-devel package to get config.h. The perl runtime is in a separate package.)
Here are some things you can try:
Verify /System/Library/Perl/5.12/darwin-thread-multi-2level/CORE/config.h exists. If not, the Xcode command line utilities were not installed properly; try installing them again.
If config.h exists, check its content and make sure it looks sane. It's a C header file and consists of comments and #define statements. (The commands after this list show both checks.)
If you don't have access to view config.h, you have a permission issue. Try using sudo make as a bypass. Disk Utility (found in Applications -> Utilities) might be able to permanently fix this.
You could risk changing the Makefile by removing "$(PERL_INC)$(DFSEP)config.h" from CONFIGDEP. I did this on my 10.8 Mac and it worked without issue (it passed all tests as well). However, if you don't find the root cause of your config.h issue, the next time you want to install a perl module you may find yourself right back where you started.
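For the first two items above, something like this:
# does the header exist?
ls -l /System/Library/Perl/5.12/darwin-thread-multi-2level/CORE/config.h
# if it does, eyeball the beginning; it should be comments and #define statements
head -n 20 /System/Library/Perl/5.12/darwin-thread-multi-2level/CORE/config.h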
I had this exact same error, and while this may not be a solution for you: after reinstalling an updated Xcode compatible with the OS X version (and rebooting after the install), I still had the error. To cut a long story short, I noticed there was no config.h in the CORE/ directory after the error. The solution that worked was to touch config.h to create the file first, and then re-run make. Hope this helps someone.