Can I list all available index servers with pip?

I have added an index server in my ~/.pypirc file as:
[distutils]
index-servers = example
[example]
repository: https://example.com/simple/
username: someplaintextusername
password: someplaintextpw
However, I can't install a package that is definitely on the example index server. Now I want to check whether pip actually notices that server in the .pypirc file.
Can I make pip list all available index servers?
edit: For the problem I'm trying to solve, it seems that ~/.config/pip/pip.conf is the file I should edit. But my question is still the same.

pip's own config list command should get you at least some of this info:
path/to/pythonX.Y -m pip config list
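Note that pip does not read ~/.pypirc at all; that file is only used by upload tools such as twine and distutils. For installs, pip takes its index URLs from its own config file (~/.config/pip/pip.conf on Linux). A minimal sketch, assuming the example server from the question:

```ini
[global]
index-url = https://pypi.org/simple/
extra-index-url = https://example.com/simple/
```

With this in place, `python -m pip config list` will show both URLs, and pip will consult the extra index when installing.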


How do I run ansible-galaxy installations using AWX?

So I've read over the other articles that are similar to this question, and while they provided an accepted answer, I wasn't able to quite get there. I am running AWX on Fedora 35. I installed it using the following guide:
https://computingforgeeks.com/install-and-configure-ansible-awx-on-centos/
After I finished this guide I also ran
yum install ansible
Then I did what the Juniper website says to do, which is
ansible-galaxy collection install juniper.device
sudo ansible-galaxy install Juniper.junos
When I attempt to run my template, I get an error.
The other question/answer I read earlier (along with the Ansible site) stated that I need to create a requirements.yml file. I'm not sure where my issue comes in, but I can't get it to work, and I don't know where the requirements file is supposed to go. I put it in /var/lib/awx/projects/_8__getconfigs/collections and in /var/lib/awx/projects/_8__getconfigs/roles.
Inside of the files is very simple (and probably wrong).
Roles file:
---
roles:
  - name: Juniper.junos
Collections file:
---
collections:
  - name: juniper.devices
Lastly I was talking with someone earlier and they mentioned privileges which could make sense. There is no account for awx on my system. If anyone could help me it would be greatly appreciated. Thank you in advance!
You can do that using Execution Environments in AWX:
1. Create a Dockerfile from awx-ee image containing the collections:
FROM quay.io/ansible/awx-ee:latest
RUN ansible-galaxy collection install gluster.gluster \
&& ansible-galaxy collection install community.general
Build the Image: docker build -t $ImageName .
Log in to your Docker repository: docker login -u $DockerHubUser
Tag the image: docker image tag $ImageName $DockerHubUser/$ImageName:latest
Push the image to Hub: docker image push $DockerHubUser/$ImageName:latest
2. Add the Execution Environment to AWX, specifying the image location: the container registry, image name, and version tag.
3. That's it.
I've tested this on a fresh AWX instance where no collections were installed.
You don't have to refer to the collection in a requirements.yml file.
Whenever a new Galaxy collection is needed, add it to the Dockerfile and push the image to the Hub again.
You can even install normal Linux packages in the Docker image if needed.
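As a concrete sketch for the original question's content (the names juniper.device and Juniper.junos are copied from the question; verify the exact names on Ansible Galaxy), the Dockerfile from step 1 could look like:

```dockerfile
# Start from the stock AWX execution environment image
FROM quay.io/ansible/awx-ee:latest
# Install the Galaxy content the playbooks need:
# a collection and a legacy role, one RUN layer
RUN ansible-galaxy collection install juniper.device \
 && ansible-galaxy role install Juniper.junos
```

It is then built, tagged, and pushed exactly as in the steps above.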

Upgrade Ansible Tower - Minor upgrade

Does anyone have a proper set of instructions to upgrade Ansible Tower 3.4 to 3.6?
(Ansible 2.5, database: PostgreSQL 9.6)
I found the Ansible docs, but they don't go into detail.
Thanks
EDIT: The original question pertained to upgrading AWX. It's been edited and now pertains to upgrading Ansible Tower. My answer below only applies to upgrading AWX.
If you used the docker-compose installation method and pointed postgres_data_dir to a persistent directory on the host, upgrading AWX is straightforward. I deployed AWX 2.0.0 in 2018 and have upgraded it to every subsequent release (currently running 9.1.0) without issue. Below is my upgrade method which preserves all data including secrets between upgrades and does not rely on using the tower cli / awx cli tool.
AWX path assumptions:
Existing installation: /opt/awx
New release: /tmp/awx
AWX inventory file assumptions:
use_docker_compose=true
postgres_data_dir=/opt/postgres
docker_compose_dir=/var/lib/awx
Manual upgrade process:
Backup your AWX host before continuing! Consider backing up your postgres database as well.
Download the new release of AWX and unpack it to /tmp/awx
Ensure that the patch package is installed on the host.
Create a patch file containing the differences between the new and
existing inventory files:
diff -u /tmp/awx/installer/inventory /opt/awx/installer/inventory > /tmp/awx_inv_patch
Patch the new inventory file with the differences:
patch /tmp/awx/installer/inventory < /tmp/awx_inv_patch
Verify that the files now match:
diff -s /tmp/awx/installer/inventory /opt/awx/installer/inventory
Copy the new release directory over the existing one:
cp -Rp /tmp/awx/* /opt/awx/
Edit /var/lib/awx/docker-compose.yml and change the version numbers
after image: ansible/awx_web: and image: ansible/awx_task: to match the
new version of AWX that you're upgrading to.
Stop the current AWX containers:
cd /var/lib/awx
docker-compose stop
Run the installer:
cd /opt/awx/installer
ansible-playbook -i inventory install.yml
AWX starts the upgrade process, which usually completes within a couple of minutes. I typically monitor the upgrade progress with docker logs -f awx_web until I see RESULT 2 / OKREADY appear.
If everything is working as intended, I shut the containers down, pull and then recreate them using docker-compose:
cd /var/lib/awx
docker-compose stop
docker-compose pull && docker-compose up --force-recreate -d
If everything is still working as intended, I delete /tmp/awx and /tmp/awx_inv_patch.
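The diff/patch merge used in the steps above can be demonstrated with throwaway files (the paths and inventory keys below are made up for illustration; they are not real AWX paths):

```shell
#!/bin/sh
# Simulate an existing (customized) and a new (default) inventory file
mkdir -p /tmp/awx_demo
printf 'admin_user=admin\npg_password=mysecret\n' > /tmp/awx_demo/inventory_old
printf 'admin_user=admin\npg_password=default\n'  > /tmp/awx_demo/inventory_new
# Capture the differences between the new and existing files
# (diff exits 1 when the files differ, so guard it)
diff -u /tmp/awx_demo/inventory_new /tmp/awx_demo/inventory_old \
  > /tmp/awx_demo/awx_inv_patch || true
# Patch the new file with the differences; it now carries the old customizations
patch /tmp/awx_demo/inventory_new < /tmp/awx_demo/awx_inv_patch
# Verify that the files now match
diff -s /tmp/awx_demo/inventory_new /tmp/awx_demo/inventory_old
```

The same pattern applied to the real installer inventories is what steps 4 to 6 do.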
Upgrades in AWX are not supported by Ansible/Red Hat. Only the commercial Tower license grants access to scripts and procedures for this.
From the awx project FAQ
Q: Can I upgrade from one version of AWX to another?
A: Direct in-place upgrades between AWX versions are not supported. It is possible to migrate data between different versions of AWX using the tower-cli tool. To migrate between different instances of AWX, please follow the instructions at https://github.com/ansible/awx/blob/devel/DATA_MIGRATION.md.
The referenced link on the AWX GitHub project explains how to export your current data with tower-cli and reimport it into the new version you install. Note that all credentials are exported with blank secrets, so you will have to update them with the passwords/secrets once imported.

Using Ansible for ScaleIO provisioning

I am using this playbook to install a 3 node ScaleIO cluster on CentOS 7.
https://github.com/sperreault/ansible-scaleio
In the EMC documentation they specify that a CSV file needs to be uploaded to the IM to complete installation, I am not sure though how I can automate that part within this playbook. Has anyone got any practical experience of doing so?
This playbook installs ScaleIO manually, not via the IM, so you do not need to prepare a CSV file.

Cloudera CDH4 install fails using Yum

I am trying to install the datanode and it gives the error "metadata file does not match checksum".
I am behind a proxy.
I have tried everything: yum clean all, yum clean metadata. I also edited the yum conf and disabled caching.
In addition, I also manually deleted the cache directory. Nothing works. Please help.
On another machine, I was able to get the namenode successfully installed.
[root@bi ~]# export http_proxy=myproxy
[root@bi ~]# sudo yum install hadoop-0.20-mapreduce-tasktracker hadoop-hdfs-datanode
http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/4/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
Trying other mirror.
http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/4/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
Trying other mirror.
Error: failure: repodata/primary.xml.gz from cloudera-cdh4: [Errno 256] No more mirrors to try
I had the same problem; it seems to be a proxy issue. Try another proxy, or take a look at your proxy configuration.
Cloudera Manager creates a .repo file before installing, and if there are any conflicts it causes that error.
To avoid such conflicts:
1) Create an /etc/yum.repos.d/cloudera-manager.repo file using any stable version of Cloudera Manager (5.2.1 was the version when I did this).
My cloudera-manager.repo file looked like this:
[cloudera-manager]
name = Cloudera Manager, Version 5.2.1
baseurl = http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/5.2.1/
gpgkey = http://archive.cloudera.com/redhat/cdh/RPM-GPG-KEY-cloudera
gpgcheck = 1
2) Now run the following command to make the installer use the local repo file.
./cloudera-manager-installer.bin --skip_repo_package=1
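A runnable sketch of the two steps (the repo file is written to a scratch directory here so the sketch can be tried anywhere; on a real host it belongs in /etc/yum.repos.d/):

```shell
#!/bin/sh
repo_dir=/tmp/cm_repo_demo   # stand-in for /etc/yum.repos.d
mkdir -p "$repo_dir"
# Write the repo file shown above
cat > "$repo_dir/cloudera-manager.repo" <<'EOF'
[cloudera-manager]
name = Cloudera Manager, Version 5.2.1
baseurl = http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/5.2.1/
gpgkey = http://archive.cloudera.com/redhat/cdh/RPM-GPG-KEY-cloudera
gpgcheck = 1
EOF
# Sanity-check the file, then point the installer at the local repo:
grep '^baseurl' "$repo_dir/cloudera-manager.repo"
# ./cloudera-manager-installer.bin --skip_repo_package=1
```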

Can't reindex sunspot. "Solr Response: Bad Request"

I'm trying to use Sunspot in production with tomcat-solr on Ubuntu 10.10.
I followed these steps:
sudo apt-get install openjdk-6-jdk
sudo apt-get install solr-tomcat
sudo service tomcat6 start
Then I updated my sunspot.yml to point the production / staging environment to port :8080.
But when I try to run rake sunspot:solr:reindex, it gives me the message "Solr Response: Bad Request".
It's been four days and I still can't figure out what is wrong =/ I couldn't find the tomcat/solr logs to get more info on what's bad in my request.
Can someone help me?
In your case, I am willing to bet that you haven't updated your configuration files with Sunspot's default schema.xml and solrconfig.xml. Log files will likely be in /var/log/tomcat6 and may complain about an unknown field "type".
I am not exactly sure where Ubuntu's solr-tomcat package creates the Solr home, but /usr/share/solr is a good place to check. You should copy Sunspot configuration files from solr/conf into Solr's own configuration directory and restart Solr to update the config files.
See also my answer to sunspot solr undefined field type.
