I am trying to use a RHEL image for my project. Is there a way to create a base image with my RHEL 6's iso file? I am not using Fedora as it is beneficial for my project to use the RHEL distribution instead.
If you want a fair bit of control, you may want to look into the scripts in the Docker contrib repository on GitHub.
One of them is for "yum"-based images (CentOS in this case), but there's no reason it shouldn't work for RHEL.
It bootstraps itself through a series of YUM installs. If you look at the usage:
OPTIONS:
  -p "<packages>"  The list of packages to install in the container.
                   The default is blank.
  -g "<groups>"    The groups of packages to install in the container.
                   The default is "Core".
  -y <yumconf>     The path to the yum config to install packages from.
                   The default is /etc/yum.conf for Centos/RHEL and
                   /etc/dnf/dnf.conf for Fedora
you can see that you can provide your own yum.conf, which you can set up to use your mounted ISO as the package source.
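For example, a rough sketch of that approach, assuming the yum variant of the script is contrib/mkimage-yum.sh, with the ISO filename, mount point and repo id below as placeholders you would adapt:

# mount the RHEL 6 ISO somewhere yum can reach it
mkdir -p /mnt/rhel6-iso
mount -o loop rhel-server-6.x-x86_64-dvd.iso /mnt/rhel6-iso

# minimal yum.conf that only knows about the ISO (repo id "rhel6-dvd" is made up)
cat > /tmp/iso-yum.conf <<'EOF'
[main]
cachedir=/var/cache/yum
keepcache=0
reposdir=/dev/null

[rhel6-dvd]
name=RHEL 6 DVD
baseurl=file:///mnt/rhel6-iso
enabled=1
gpgcheck=0
EOF

# build the base image from that repo
./contrib/mkimage-yum.sh -y /tmp/iso-yum.conf rhel6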
Related
Say you regularly use a large python dependency like tensorflow but you want to create siloed virtual environments for each separate project.
If I download and install tensorflow to my system using pip, is there a way to tell pipenv to use the previously downloaded dependency instead of doing a slow and high-bandwidth re-download per virtual environment I set up?
I would like to create a Conda environment from a .yaml file on an offline machine (i.e. no Internet access). On an online machine this works perfectly fine:
conda env create -f environment.yaml
However, it doesn't work on an offline machine as the packages are then not found. How do I do this?
If that's not possible is there another easy way to get my complete Conda environment to an offline machine (including both Conda and pip installed packages)?
Going through the packages one by one to install them from the .tar.bz2 files works, but it is quite cumbersome, so I would like to avoid that.
If you can use pip to install the packages, you should take a look at devpi, particularly its server. devpi can cache packages normally installed from PyPI, so it only actually retrieves them from PyPI on the first install. You have to configure pip to retrieve the packages from the devpi server.
As you don't want to list all the packages and their dependencies by hand, you should, on a machine connected to the internet:
install the devpi server (I have that running in a Docker container)
run your installation
examine the devpi repository and gather all the .tar.bz2 and .whl files out of there (you might be able to tar the whole thing)
On the non-connected machine:
Install the devpi server and client
use the devpi client to upload all the packages you gathered (using devpi upload) to the devpi server
make sure you have pip configured to look at the devpi server
run pip, it will find all the packages on the local server.
devpi has a small learning curve, but it is already worth traversing because of the speed-up and the ability to install private packages (i.e. ones not uploaded to PyPI) as normal dependencies, by just generating the package and uploading it to your local devpi server.
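A minimal sketch of that round trip (the index names, URLs and package name are placeholders, and the exact server start-up commands differ between devpi versions, so check the devpi docs):

# on the connected machine: run a caching server and install through it
devpi-server --start
pip install --index-url http://localhost:3141/root/pypi/+simple/ tensorflow
# the wheels/sdists it fetched now sit in the server's data directory

# on the offline machine: run a server, create an index and upload the gathered files
devpi-server --start
devpi use http://localhost:3141
devpi login root --password=''
devpi index -c offline
devpi use root/offline
devpi upload --from-dir /path/to/gathered/packages
pip install --index-url http://localhost:3141/root/offline/+simple/ tensorflow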
I guess that Anthon's solution above is pretty good but just in case anybody is interested in an easy solution that worked for me:
I first created a .yaml file specifying the environment using conda env export > file.yaml. Then:
Following the instructions on http://support.esri.com/en/technical-article/000014951, I automatically downloaded all the necessary installation files for the conda-installed packages and created a channel from those files. For that, I just adapted the code from the link above to work with my .yaml file instead of the conda list file they used.
I automatically downloaded the necessary files for the pip-installed packages by looping through the pip entries in the .yaml file and using pip download for each of them.
I automatically created separate conda and pip requirement lists from the .yaml file.
I then created the environment using conda create with the offline flag, the file with the conda requirements, and my custom channel.
Finally, I installed the pip requirements using pip install with the pip requirements file and the folder containing the pip installation files passed to --find-links.
That worked for me. The only problem is that if you need to specify a different operating system than the one you are running on, pip download can only fetch binaries, and for some packages no binaries are available. That was okay for me for now, as the target machine has the same characteristics, but it might be a problem in the future, so I am planning to look into the solution suggested by Anthon.
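For reference, a condensed sketch of that workflow as shell commands (the file names, the local channel folder and the split requirement files are placeholders, not an exact recipe; conda index needs conda-build installed):

# on the online machine
conda env export > file.yaml
# download the conda packages listed in file.yaml into ./channel
# (the download loop itself is the adapted code from the article above)
conda index ./channel
# download the packages listed under the "pip:" section of file.yaml
pip download -r pip_requirements.txt -d ./pip_packages

# on the offline machine
conda create --name myenv --offline --channel file:///path/to/channel --file conda_requirements.txt
pip install --no-index --find-links ./pip_packages -r pip_requirements.txt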
I am trying to install a GUI desktop on RHEL 7, which does not come with one by default.
There wasn't even ntfs-3g installed to work with the NTFS file system.
Really "nice" build, this RHEL 7!!!!
and I'm using the EPEL repos for that:
http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
while the other link,
http://download.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-9.noarch.rpm
is not working.
So I installed epel-release-latest-7.noarch.rpm with rpm.
But I do not have the group 'Server with GUI' in yum there (which is what is recommended to install).
Trying to install gnome-desktop, the X Window System, Xfce, or whatever, I get messages about lots of unsatisfied dependencies like:
lots with libgtk*
libX11*
gk* whatever
so I guess the dependencies for GNOME are missing.
Yeah, so many libs are not included in RHEL 7 out of the box; bravo to this version.
I guess there is no info about the dependencies in the EPEL package or
even in gnome-desktop-2.32.0-17.el7.x86_64.rpm.
Can someone please tell me where to download all (and I mean really all) dependencies for the GNOME desktop or any other desktop?
Before this issue I assumed every package must have a list of all its dependencies; otherwise it is just... stupid.
You don't need the EPEL repository for GUI on RHEL. All required packages, including all necessary dependencies, are a part of the default RHEL repositories.
Make sure that your installation is properly registered and subscribed (see How to register and subscribe a system to the Red Hat Customer Portal using Red Hat Subscription-Manager).
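For example, a rough sketch of the registration steps (the exact options depend on your account and subscriptions):
# subscription-manager register --username <your-username> --password <your-password>
# subscription-manager attach --auto
# yum grouplist
After that, yum grouplist should show "Server with GUI" among the available environment groups.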
Then you should be able to install a GUI environment of your choice without any outside dependencies, i.e. it should be possible to execute, for example:
# yum groupinstall "Server with GUI"
See more detailed info at How to install a graphical user interface (GUI) for Red Hat Enterprise Linux.
I want to install Logstash for Node.js on Windows 7, but I am not able to find proper steps for it.
Can anyone please help?
There is the option of node-logstash if you want a node.js alternative to Logstash. This isn't something I'm using myself (I'm using nxlog in Windows instead) but it looks like a decent alternative to the standard JRuby Logstash if you need to forward logs from Windows.
Instructions from the readme are below:
Installation
Install NodeJS, version >= 0.10, or io.js.
Install build tools
Debian based system: apt-get install build-essential
Centos system: yum install gcc gcc-c++ make
Install zmq dev libraries: This is required to build the node zeromq module.
Debian based system: apt-get install libzmq1. Under recent releases, this package is present in the default repositories. On Ubuntu Lucid, use this PPA. On Debian Squeeze, use backports.
CentOS 6: yum install zeromq zeromq-devel. Beforehand, you have to add the zeromq RPM repo: curl http://download.opensuse.org/repositories/home:/fengshuo:/zeromq/CentOS_CentOS-6/home:fengshuo:zeromq.repo > /etc/yum.repos.d/zeromq.repo
Clone repository: git clone git://github.com/bpaquet/node-logstash.git && cd node-logstash
Install dependencies: npm install.
The executable is in bin/node-logstash-agent
There are scripts in the dists folder to build packages. Currently, only Debian is supported.
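Once installed, the readme's examples configure the agent with URL-style plugins on the command line; a minimal smoke test might look roughly like this (the log path is a placeholder, and you should check the readme for the exact plugin syntax of your version):
node bin/node-logstash-agent input://file:///tmp/test.log output://stdout://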
As per the comment, Logstash has nothing to do with Node.js.
What you're looking to do is install Logstash on Windows, something you can find out about by using Google; there are loads of guides out there describing how to do this.
You would then need to configure Logstash to look in the right location for the log files it needs to process, and then set up filters to handle Node.js-style logs (which, as far as I understand, aren't very well standardised). You then need to configure an output (Logstash is essentially a Unix pipe on steroids and needs somewhere to save the logs it has processed). Elasticsearch is the most common thing to save logs to.
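To make that concrete, a Logstash pipeline config roughly follows this shape (the path, the json filter and the Elasticsearch host are assumptions to adapt, and option names vary a bit between Logstash versions):
input {
  file {
    path => "C:/myapp/logs/*.log"
  }
}
filter {
  # assuming the Node.js app writes JSON lines; otherwise use grok patterns here
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}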
Personally, in my environment, I would install logstash on a CentOS server, as it's a well established process, and ship the logs from your Windows 7 machine to the logstash server using either logstash forwarder or nxlog. That way you can have logs coming in from a number of different sources and you can still reboot your Windows machine every few days as required by Windows update without your logstash server going down.
I have an Ubuntu (12.04 LTS) install for my desktop, and I have two VPS servers that run Ubuntu (11.04 LTS) as well. I have PHP running on these servers using fcgi, but I want to upgrade to the latest version of PHP (5.4.3) and include the modules that I need baked right in. It just so happens that the regular ./configure script happens to include all of the things that I need. So from here, I want to make a deb package that I can use on my two VPS servers so that I can quickly install it using apt-get install php. What do I have to do in order for this to happen?
I would be making the package from the desktop installation that I have (Ubuntu 12.04 LTS) and distributing it to my servers via FTP or by setting up a Launchpad account. The desktop is a stock install, and the only extra thing that I added was libxml2-dev so that I could compile PHP. The servers are also bare, only running 10 processes, including nginx and php-cgi.
Download and build the source package from Debian testing; they currently seem to be on PHP 5.4.4. (You may need to add some backports etc, though.) Set up your own repository and add it to /etc/apt/sources.list.d on the servers. You may need to build on a 11.04 box in order to be able to install on 11.04 (or play tricks with versioned dependencies).
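A rough outline of that process (the package names and paths are illustrative, and on Ubuntu you may need a deb-src line for Debian testing to pull that source version):

# on the build machine
sudo apt-get install build-essential devscripts dpkg-dev
sudo apt-get build-dep php5        # install the build dependencies
apt-get source php5                # fetch and unpack the source package
cd php5-*/
debuild -us -uc                    # build unsigned .deb packages

# publish the resulting .debs as a trivial repository
mkdir -p /var/www/repo && cp ../*.deb /var/www/repo/
cd /var/www/repo && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

# on each server, add a line like this to /etc/apt/sources.list.d/local-php.list:
#   deb http://your.repo.host/repo ./
# then: apt-get update && apt-get install php5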