I have an RHN Satellite 6.4 server and have installed rhel-system-roles as per the documentation:
yum install rhel-system-roles
But when I click in the GUI on Configure -> Ansible -> Roles I get the error:
no ansible roles were found in satellite.
I have also copied some roles from the system-roles folder to /etc/ansible/roles/ and also created a test_role folder there, but I still cannot import or see them in the GUI.
I have restarted the server. Could the cause be that I do not have a host connected without errors under Hosts?
Thanks in advance.
I have run into a slight snag and would appreciate some advice/help.
In my company we use Ansible Tower running Ansible 2.9. Tower is supported and run by a separate team, which manages all the collections and modules provided with v2.9. We use Tower to create automations, mainly interacting with VMware vCenter. During development there have been times when the 2.9 version of some modules had bugs or simply lacked functionality that the community modules have. Therefore, until now we have been creating a library folder and a collections folder at the top level of our project and just adding the modules we need in there, and it has been working absolutely fine.
Recently, however, we have had the need to set up local environments to make development more efficient. I have done this using Vagrant, installing RHEL 8.4 with the following:
ansible [core 2.13.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/vagrant/.ansible/collections:/usr/share/ansible/collections
executable location = /home/vagrant/.local/bin/ansible
python version = 3.10.5 (main, Jun 14 2022, 14:27:52) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.1.2
libyaml = True
Now everything was working fine until I ran a project that had something to do with vCenter. It threw an error stating that Ansible could not resolve the module/action. I thought this was due to not having the modules installed, so I used the ansible-galaxy command to install the community.vmware collection. This also worked fine. However, because I have a folder called collections at my project level, Ansible looks for anything that is not part of ansible-core in the project-level collections folder, not in the configured collection locations. Because of this, none of my roles are able to execute.
So my questions are:
Is it possible to have a collections folder at the project level and one at the system level, and for Ansible to look at both to find my modules (see the sketch after these questions)?
Is there any way of getting the same modules that I have in Ansible Tower into my Vagrant box, so that we are developing with the same things as if I were running this in Tower only?
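For reference, ansible-core can search several collection locations; a minimal ansible.cfg sketch, reusing the paths from the version output above (the project-relative entry is an assumption, adjust to your layout):

[defaults]
# searched in order; a collections folder next to the playbook is still consulted first
collections_path = ./collections:/home/vagrant/.ansible/collections:/usr/share/ansible/collections

To get the same content as Tower, you can also install pinned versions straight into the project folder, e.g. ansible-galaxy collection install community.vmware -p ./collections.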
Apologies if I have missed anything out; if so, please let me know and I will do my best to provide the info needed.
Thank you all in advance
I know that you can set up a proxy in Ansible to provision behind a corporate network:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_environment.html
like this:
environment:
  http_proxy: http://proxy.example.com:8080
Unfortunately, in my case there is no access to the internet from the server at all. Downloading roles locally and putting them under the /roles folder seems to solve the role issue, but roles still download packages from the internet when using:
package:
  name: package-name
  state: present
I guess there is no way to make a dry/pre-run so that Ansible downloads all the packages, then push those into a repo and run the Ansible provisioning using locally downloaded packages?
This isn't really a question about Ansible, as all Ansible is doing is running the relevant package management system on the target host (i.e. yum, dnf, apt or whatever). So it is a question of what solution the specific package management tool provides for this case.
There are a variety of solutions, and in the CentOS/RHEL world, for example, you can:
Create a basic mirror
Install a complete enterprise management system
There is another class of tool generally called an artefact repository. These started out life as tools to store binaries built from code, but have added a bunch of features to act as a proxy and cache packages from a wide variety of sources (OS packages, pip, NodeJS, Docker, etc.). Two examples that have limited free offerings:
Nexus
Artifactory
They of course still need to collect those packages from a source, so at some point those are going to have to be downloaded to be placed within these systems.
As clockworknet pointed out, this is more related to RHEL package handling. Setting up a local mirror somewhere inside the closed network can provide a solution in this situation. More info in "How to create a local mirror of the latest update for Red Hat Enterprise Linux 5, 6, 7 without using Satellite server?": https://access.redhat.com/solutions/23016
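On a machine that does have access, the mirror itself can be built with the stock yum tooling; a minimal sketch, assuming yum-utils and createrepo are installed, with an illustrative repo ID and path:

# pull every package of the repo into a local directory
reposync --repoid=rhel-7-server-rpms --download_path=/srv/mirror
# generate the repodata so the directory can be served as a yum repository
createrepo /srv/mirror/rhel-7-server-rpms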
My solution:
Install Sonatype Nexus 3
Create one or more yum proxy repositories
https://help.sonatype.com/repomanager3/formats/yum-repositories
Use Ansible to add these proxies via yum_repository
https://docs.ansible.com/ansible/latest/modules/yum_repository_module.html
yum_repository:
  name: proxy-repo
  description: internal proxy repo
  baseurl: https://your-nexus.server/url-to-repo
Note: I did this for APT and it works fine; I would expect the same for yum.
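For the APT case, the counterpart module is apt_repository; a minimal sketch, with a hypothetical Nexus proxy URL and distribution name:

apt_repository:
  repo: deb https://your-nexus.server/repository/apt-proxy focal main
  state: present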
I have never hosted any website before; maybe that's why this task became so tough for me. I searched various guides for deployment but wasn't able to host my website.
I used Python 3.6.4 and Django 2.0.2 with a MySQL database for my website. It would be a great help if I could get steps from scratch for deployment with my requirements.
Thanks in advance!
Below are the basic steps to host your Django website on any Linux-based server.
1) Create a requirements.txt file which will include all your pip packages.
In your local environment just run pip freeze. It will show you something like the output below. Include those packages in your file.
Django==1.11.15
pkg-resources==0.0.0
pytz==2018.5
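To capture that list straight into the file, you can redirect the output (run from your project root):

pip freeze > requirements.txt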
2) Create a virtual env on your Amazon EC2 instance. You can follow the steps given on the website below.
https://docs.python-guide.org/dev/virtualenvs/
3) Install your local packages into this virtual env, as sketched below.
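A minimal sketch of steps 2) and 3) together, assuming Python 3 and a requirements.txt in the project root (the env name is illustrative):

python3 -m venv venv                # create the virtual environment
source venv/bin/activate            # activate it
pip install -r requirements.txt     # install the packages captured earlier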
4) If you have MySQL as the backend, you can install MySQL with the command below
sudo apt-get install mysql-server
Or you can use RDS (Amazon Relational Database Service)
5) Check that your Django app is able to connect to MySQL using the command below
python manage.py check
6) If the above command works without error, you need to install two things.
1) Application server
2) Web server
7) You can use any application server, such as uWSGI or Gunicorn
https://uwsgi-docs.readthedocs.io/en/latest/
https://gunicorn.org/
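For example, with Gunicorn installed inside the virtual env, a minimal sketch (mysite is a hypothetical project name):

pip install gunicorn
gunicorn mysite.wsgi:application --bind 127.0.0.1:8000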
8) The web server will be nginx
https://nginx.org/en/
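A minimal nginx server block that forwards requests to the Gunicorn sketch above; the domain is a placeholder:

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}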
9) For your static files you will need a bucket. You need to create a bucket and host your static files there.
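One common way to point Django static files at an S3 bucket is the django-storages package (an assumption, not the only option); a minimal settings.py sketch with a hypothetical bucket name:

# requires: pip install django-storages boto3
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_STORAGE_BUCKET_NAME = 'my-static-bucket'  # hypothetical bucket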
You can find help online to achieve the above steps.
I am attempting to set up a dev environment using VirtualBox on an OSX host running Ubuntu Server 16.10 as the guest.
I am stuck on getting Samba to share the dev directory on the guest so that, ultimately, NetBeans can be used on OSX to edit the server files via the shared directory.
This works fine from OSX to a separate physical Ubuntu machine.
Starting from the standard Samba config, the following is added at the end:
[testsharename]
# note: trailing slash required
path = /home/myusername/shared/
#hosts deny = *
# IP of an allowed LAN address
#hosts allow = 192.168.0.210
guest ok = yes
writeable = yes
The actual share is identified by Finder on OSX; however, on clicking it there is an error that it cannot be found. Renaming the share is reflected in Finder. The commented-out lines are there because I only really want a single LAN IP to have access.
The Finder error says the operation can't be completed because the original item for "testshare" can't be found.
Logs showed Can't mount Samba share (canonicalize_connect_path failed), so some research narrowed this down to a permissions issue, hinted at by https://ubuntuforums.org/showthread.php?t=1439582
Moving the share out of the home directory into /var/www/, as I originally required (the home directory part was simply for testing things), with 777 permissions on the share dir only, showed it to work perfectly.
However, I certainly don't agree with the forum post that all path nodes leading to the share require permission changes.
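For completeness, the fix usually suggested in that thread is to grant traverse (execute) permission on each directory leading to the share; a minimal sketch, assuming the share still lives under the home directory:

# allow other users to descend into the parent directory without reading it
chmod o+x /home/myusername
# open up the share directory itself
chmod o+rwx /home/myusername/shared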
I am using this playbook to install a 3-node ScaleIO cluster on CentOS 7:
https://github.com/sperreault/ansible-scaleio
In the EMC documentation they specify that a CSV file needs to be uploaded to the IM (Installation Manager) to complete installation, but I am not sure how I can automate that part within this playbook. Does anyone have any practical experience of doing so?
This playbook is used to install ScaleIO manually, not via the IM, so you do not need to prepare a CSV file.