Is there any way to install a new version of GLIBC locally in a folder? Would I then be able to add its library paths to LD_LIBRARY_PATH without interfering with the system libraries?
I haven't found such a solution. As far as I can tell, that means the whole system would have to be upgraded for this purpose, which is not what I want.
P.S.: Someone mentioned it here, but without any detail.
Is there any way to install a new version of GLIBC locally in a folder?
Yes, see this answer.
Would I then be able to add its library paths to LD_LIBRARY_PATH without interfering with the system libraries?
As the answer explains, you can't do that.
Yes, you can do it; I'd advise doing it with ordinary-user permissions so as not to break the whole system accidentally. It is easier to unpack the package manually than to use the OS package tools.
Then you should be able to add the path to LD_LIBRARY_PATH, and it will be preferred over the system lib directory. You can use the ldd command to see which runtime libraries will actually be used; it should show no "not found" entries. If it does, you are missing something.
You'll need to experiment to make it actually work, but this way you only break one console session at a time, which you can easily restart. An alternative might be a hardlinked chroot environment, but that looks like overkill.
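As a rough sketch of that unpack-and-test approach (the package name, version, and paths below are assumptions, not exact instructions):
# unpack a glibc package into a local prefix as an ordinary user
mkdir -p ~/glibc-local && cd ~/glibc-local
# e.g. on an RPM-based system, extract without installing:
rpm2cpio glibc-2.28-42.x86_64.rpm | cpio -idmv
# expose the local copy to this shell session only
export LD_LIBRARY_PATH=$HOME/glibc-local/lib64:$LD_LIBRARY_PATH
# check what a binary would actually load; no "not found" lines should appear
ldd /path/to/your/program
Keep in mind that the other answer disputes this working in general, since the path to the dynamic loader (ld-linux) is hardcoded into executables, so experiment in a throwaway session as suggested.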
I made an application using GTK3 on Windows (Mingw_x64 installation of GTK) and I cannot figure out how to make a distribution out of it. According to the official PyGObject documentation, it is possible in some way.
I already tried to make a package using setuptools, but the PyGObject documentation does not say much about this process and I was not able to configure the setup correctly to make it work. PyGObject has a lot of dependencies and unusual imports that I do not know how to include.
I also tried PyInstaller, which claims to have GTK support, and it really can pack the app into an executable; however, the result does not work. I tried these two options:
making only one file (.exe): in this case it throws an error that some file is not found (libpixbufloader-ani.dll)
creating a directory with all the needed files (libpixbufloader-ani.dll and the other libs are included this time): when running the exe, another exception occurs; this time Struct and two other libraries are missing (strangely, there is a folder that contains Struct)
Because of the missing files, I tried adding as many paths containing the needed libraries as possible to PyInstaller, but without success.
Does anyone have experience with packaging GTK applications in Python? There is definitely a way to do this, but I am not very experienced with packaging. If needed, I can provide more information.
This is an issue that has been brought up on PyInstaller's GitHub page, as others (including myself) have run into the same problem you mention.
The last time I tried the dev version of PyInstaller, the issue still wasn't fixed, but I managed to get a working executable by using PyInstaller to find the dependencies that my Python3/GTK3 app needed, and then I used cx_Freeze to generate the final executable.
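For what it's worth, that two-step workflow might look roughly like this (the application name and folder layout are placeholders, and the final copy step is an assumption about what cx_Freeze tends to miss):
# 1) let PyInstaller resolve the GTK DLLs, typelibs and pixbuf loaders
pyinstaller --onedir main.py
# 2) build the final executable with cx_Freeze (it needs its own setup.py)
python setup.py build
# 3) copy anything cx_Freeze missed from PyInstaller's output, e.g. the
#    gdk-pixbuf loaders behind the libpixbufloader-ani.dll error above
cp -r dist/main/lib/gdk-pixbuf-2.0 build/exe.win-amd64-3.7/lib/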
I am currently attempting to install the TensorFlow object detection app on Windows 7 (employer requirement) and I am failing a few steps from the end.
Basically I get the following error when I run the installation test command:
ImportError: No module named nets.
I have read some solutions on the subject:
https://github.com/tensorflow/models/issues/729
https://github.com/tensorflow/models/issues/1842
The suggested fix looks like this:
export PYTHONPATH="$PYTHONPATH:"somepath"/tensorflow/models/slim"
meaning that I must set the right path in the PYTHONPATH environment variable.
Working with Windows, I tried calling this:
SET PYTHONPATH="$PYTHONPATH:C:tensorflow/models/slim
When that didn't work, I created a PYTHONPATH variable under System -> Environment Variables.
I'm still getting the error, so I suppose I am missing something, but due to my lack of knowledge I can't figure out what.
Would someone familiar with Windows be able to point out what's missing?
Thanks
On Linux:
Add export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim to ~/.bashrc.
Attention: keep the backquote marks around pwd; they are command substitutions, so run this from the tensorflow/models directory.
If you work with Windows, I guess it should look like this: PYTHONPATH=$PYTHONPATH:'C:/tensorflow/models':'C:/tensorflow/models/slim'
Just my guess; give it a try.
Good luck!
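For reference, cmd.exe syntax differs from the Unix line above: variables are expanded with %VAR% and PATH-like lists are joined with ";". A sketch (the install path is an assumption):
SET PYTHONPATH=%PYTHONPATH%;C:\tensorflow\models;C:\tensorflow\models\slim
:: verify from Python that the directories were picked up
python -c "import sys; print(sys.path)"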
If you run setup.py, it will install all the relevant modules for object detection. The other option is to download the git repository, cd to the folder, and try to run the module from there. You might face a protobuf issue; try to install it before running the code. It's a bit complicated to install protobuf on Windows, but if you are not using a ".pb" file, you don't need to.
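If you take that route, a sketch of the steps (paths assumed; note that cmd.exe does not expand the *.proto glob, so on Windows you may have to list the proto files individually):
cd tensorflow/models
# compile the protobuf definitions before running anything
protoc object_detection/protos/*.proto --python_out=.
python setup.py install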
I figured out a way to make it work. I am not writing this as a final answer, as it is mostly a workaround, and due to my lack of understanding I cannot guarantee it will work for you (it also might not be best practice).
Anyway, here it is:
As Beta previously suggested, you have to run setup.py; however, running it from the models folder did not do it for me. I also had to run it from the object_detection folder.
There was a problem there, though: it generated an error saying that BUILD already existed (which was correct), so I had to delete the BUILD file from inside models.
After that it worked; it turns out the path I had set was working fine.
Now, if some experts would look into this and explain how and why this workaround worked, it might make this a valid solution.
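For what it's worth, the workaround as a command sequence (paths are assumptions). A plausible explanation for why it works: Windows filesystems are case-insensitive, so the Bazel BUILD file collides with the build directory setup.py wants to create.
cd C:\tensorflow\models
:: setup.py cannot create its build directory while the Bazel BUILD file
:: exists, because the names collide on a case-insensitive filesystem
del BUILD
python setup.py install
cd object_detection
python setup.py install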
I wanted to install Caffe on openSUSE.
Just for the record: it worked out for me, I just don't know the "exact" way to do this. What I did maybe isn't for someone who's new to this, and it was also kind of a "bad installation". My way was the following:
First, I did
make all
This worked, until it complained that some libraries weren't found (libcblas etc.). So I used
ccmake .
to change the library paths manually. I needed to type in the paths to the snappy, boost_python, blas, cblas, and lapack libs. After doing that I ran
cmake .
and then
make
and everything worked. My problem now is: why doesn't make find the libs, and is there a way to fix this? I think the problem is that I didn't have /usr/lib/libcblas.so but only /usr/lib/libcblas.so.3, with similar "problems" for the other libraries.
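If that is the cause, the usual fix is to install the development packages, which ship the unversioned .so symlinks the linker looks for; a sketch (the openSUSE package names are guesses):
# development packages normally provide /usr/lib/libfoo.so -> libfoo.so.N
sudo zypper install blas-devel lapack-devel snappy-devel boost-devel
# or, as a manual workaround for a single library:
sudo ln -s /usr/lib/libcblas.so.3 /usr/lib/libcblas.so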
Another thing: when I tried using ccmake/cmake right from the beginning (without the make part first), there weren't any files in my build directory ($CAFFE_ROOT/build/examples and $CAFFE_ROOT/build/tools were empty, for example), so the MNIST tutorial wasn't working. That's why I first called
make all
which may seem strange to you.
Of course I know how to fix this stuff, but I would like to know what the correct way to do a "clean and simple installation" is. Did I miss anything when using make/cmake? Is this some kind of inconsistency in Caffe, or something else? And what is the clean way to do this?
Maybe look at the Ubuntu installation guide? http://caffe.berkeleyvision.org/install_apt.html
It mentions all the different packages you might need. I couldn't find openSUSE installation instructions, but you should be able to translate the apt-get commands for your platform.
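For example, the guide's main apt-get line might translate roughly to the following (package names are guesses and may differ on openSUSE):
# Ubuntu: apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev ...
sudo zypper install protobuf-devel leveldb-devel snappy-devel opencv-devel \
  boost-devel hdf5-devel gflags-devel glog-devel lmdb-devel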
Very often we need to install software from source code. Most of the time I just run "make world" or "make all" and it works like a charm. But other times we see make errors, and we need to install other packages to let the make go through. This is particularly a problem when compiling low-level systems, such as a Linux kernel or the Xen hypervisor.
I had one such experience with Xen 3.4. Maybe it is documented in some corner of the documentation, but it depends on udev-125 to work properly. The weird thing is that it functions well most of the time when the udev version is 160+; it only breaks in certain cases! It took me a few MONTHS to find out it was because of the wrong udev version!
To make developers' lives easier: when source code builds successfully on one machine, is there a tool to record the list of packages and versions on that machine? Such a 'snapshot' could be shipped with the source code, so that when someone hits a make error they at least have a known-good 'snapshot' for reference.
Is there such a tool already?
If your software depends on a specific version of a dependency, you should write a check in your configure script/CMake file/etc. that tests the version of the dependency and bails out if the wrong version is found.
Comparing the output of config.log (a file created by a configure script) can also help diagnose problems like the one you encountered.
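For example, a minimal fragment for a configure script that enforces a dependency version (the udev probe command is an assumption; adapt it to whatever your dependency provides):
# bail out early instead of failing mysteriously at build or run time
udev_version=$(udevadm --version 2>/dev/null)
if test -z "$udev_version" || test "$udev_version" -lt 125; then
  echo "configure: error: udev >= 125 required (found: ${udev_version:-none})" >&2
  exit 1
fi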
At work I'm using Perl 5.8.0 on Windows.
When I first installed Perl, I went to CPAN, downloaded all the sources, made a few changes (in the .MAK file(?), to support threads, or things like that), and did nmake / nmake test / nmake install. Then, bit by bit, I downloaded individual modules from CPAN and did the nmake dance.
So, I'd like to upgrade to a more recent version, but the new one must not break any existing scripts. Notably, all the modules my scripts "use" must also be installed in the new version.
What's the most reliable (and easiest) way to update my current version, ensuring that everything I've done with the nmake dance will still be there after updating?
As others noted, start by installing the new perl in a separate place. I have several perls installed, each completely separate from all of the others.
To do that, you'll have to configure and compile the sources yourself. When you run the configure script, you'll get a chance to specify the installation prefix. I gave detailed instructions for this in "Compiling My Own Perl" in the Spring 2008 issue of The Perl Review. There's also an item in Effective Perl Programming that shows you how to do it.
Now, go back to your original distribution and run cpan -a to create an autobundle file. This is a Pod document that lists all of the extra modules you've installed, and CPAN.pm understands how to use it to reinstall everything.
To install things in the new perl, use that perl's path to start CPAN.pm and install the autobundle file you created. CPAN.pm will get the right installation paths from that perl's configuration.
Watch the output to make sure things go well. This process won't install the same versions of the modules, but the latest versions.
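A sketch of that round trip (the bundle name is whatever cpan -a reports; the new perl's path is an assumption):
# on the old perl: snapshot everything installed into an autobundle
cpan -a
# -> writes something like ~/.cpan/Bundle/Snapshot_2009_01_15_00.pm
# on the new perl: reinstall the whole list in one go
C:\newperl\bin\perl -MCPAN -e "install Bundle::Snapshot_2009_01_15_00"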
As for Strawberry Perl, there's a "portable" version you can install somewhere besides the default location. That way you could have the new perl on removable media. You can test it anywhere you like without disturbing the local installation. I don't think that's quite ready for general use though. The Berrybrew tool might help you manage that.
Good luck, :)
I would seriously consider looking at using Strawberry Perl.
You can install a second version of Perl in a different location. You'll have to re-install any non-core modules into the new version. In general, different versions of Perl are not binary compatible, which could be an issue if you have any program-specific libraries that utilize XS components. Pure Perl modules shouldn't be affected.
If you stay within the 5.8 track, all installed modules that contain XS (binary) extensions will continue to work, as binary compatibility is guaranteed within the same 5.8 series. If you moved to 5.10 then you would have to recompile any modules that contain XS components.
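One quick way to compare the compatibility-relevant details of two interpreters (these fields come from perl's own Config):
# XS modules built for one perl generally only load into another
# perl with a matching version and archname
perl -V:version -V:archname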
All you need to do is ensure that the new build lists the previous include directories in its @INC array (which is used to look for modules).
By the sounds of it, I think you're on Windows, in which case the current @INC paths can be viewed with
perl -le "print for @INC"
Make sure you install your new Perl version into another directory. It will happily coexist
with the previous version, and this lets you choose which Perl installation gets used; it's just a question of getting your PATH order sorted out. As soon as a Perl interpreter starts up, it knows where to look for the rest of its modules.
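A sketch of pointing the new perl at the old module directories (paths are assumptions; the XS caveat above still applies):
:: directories listed in PERL5LIB are prepended to @INC
SET PERL5LIB=C:\Perl58\site\lib
:: confirm the old site directory now shows up in the search path
perl -le "print for @INC"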
Strawberry Perl is probably the nicest distribution on Windows these days for rolling your own.
I think the answer to this involves virtualisation of some kind:
Set up an exact copy of your current live machine. Upgrade Perl, using the same directory locations and structures as you're using at the moment.
Go through your scripts testing them on the new image.
Once you're happy, flip the switch.
The thinking behind this is that there's probably all sorts of subtle dependencies and assumptions you haven't thought of. While unlikely, the latest version of a particular module (possibly even a core module, although that's even more unlikely) might have a subtle difference compared to the one you were using. Unless you've exhaustively gone through your entire codebase, there's quite possibly a particular module that's required only under certain circumstances.
You can try and spot this by building a list of all your scripts - a list that you should have anyway, by dint of all your code being under version control (you are using version control, e.g. Subversion, yes?) - and iterating through it, running perl -c on each script. e.g. this script. That sort of automated test is invaluable: you can set it running, go away for a coffee or whatever, and come back to check whether everything worked. The first few times you'll probably find an obscure module that you'd forgotten about, which is fine: the whole point of automating this is so that you don't have to do the drudge-work of checking every single script.
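A sketch of such an automated check (the directory layout and interpreter path are assumptions):
# compile-check every script with the new perl; failures reveal missing
# modules without actually running anything
find . -name "*.pl" -print0 | xargs -0 -n 1 /path/to/new/perl -c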
When I did it I installed the newer one into a separate directory. There's a bit of added confusion running two versions, but it definitely helps make sure everything's working first, and provides a quick way of switching back to the old one in a pinch. I also set up Apache to run two separate services, so I could monkey around with the newer Perl in one service without touching the production one on the old Perl.
It's probably a lot wiser, in hindsight, to install on a separate computer, and do your testing there. Record every configuration change you need to make.
I am not sure about building it yourself; I always just used prepackaged binaries for Windows.
I'm not sure I understand exactly what you're asking. Do you have a list of changes you made to the 5.8 makefile? Or is the question how to obtain such a list? Are you also asking how to find out which packages above the base install you've obtained from CPAN? Are you also asking how to test that your custom changes won't break those packages if you get them from CPAN again?
Why don't you use ActivePerl and its "ppm" tool to (re)install modules?
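For reference, the ppm workflow looks roughly like this (the module name is just an example):
:: ActivePerl's package manager fetches prebuilt binary modules,
:: so there is no nmake dance at all
ppm search DBI
ppm install DBI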