Can AWS CLI be installed using Python 2.7? - Windows

I installed the AWS CLI from Python 2.7 using python -m pip install awscli. It seemed to install, but when I try to run aws, I get 'aws' is not recognized as an internal or external command.
The documentation says I should add this to PATH:
%USERPROFILE%\AppData\Local\Programs\Python\Python36\Scripts
But that is for Python 3. Where does it get installed for Python 2? There is nothing in %USERPROFILE%\AppData\Local\Programs\ (I checked). And does installation work for Python 2, or only for Python 3?

After lots of searching, the file turned out to be at c:\Python27\Scripts\aws.cmd (note: aws.cmd, not aws.exe). So to make aws work, you need to add that directory to the PATH:
set PATH=%PATH%;c:\Python27\Scripts
After that it works:
c:\Python27>aws --version
File association not found for extension .py
aws-cli/1.11.148 Python/2.7.14rc1 Windows/10 botocore/1.7.6
Although there is still this weird File association not found for extension .py error.
Edit: per @zwer's comment about "File association not found for extension .py", you need to run these from an administrator cmd prompt:
assoc .py=Python.File
ftype Python.File=c:\Python27\python.exe "%1" %*
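Note that set PATH only changes the current cmd session. A sketch for making it persistent (caveat: setx truncates stored values longer than 1024 characters, so check your PATH length first):
rem Persist the updated PATH for the current user; new cmd windows pick it up
setx PATH "%PATH%;c:\Python27\Scripts"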

The best approach to get this done is:
Install pip
pip install awscli
aws configure
then enter your access key ID, secret access key, and the other configuration parameters
To install pip:
You need to enable the EPEL repository and then install python-pip:
#yum install epel-release
#yum install python-pip
Install AWSCLI:
#pip install awscli
Configure AWSCLI:
#aws configure
AWS Access Key ID [None]: <########>
AWS Secret Access Key [None]: <####################>
Default region name [None]: us-west-2
Default output format [None]: json
You can find these configuration parameters later in the files:
~/.aws/credentials and ~/.aws/config
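For reference, after aws configure those files look roughly like this (the key values are placeholders):
# ~/.aws/credentials
[default]
aws_access_key_id = <########>
aws_secret_access_key = <####################>
# ~/.aws/config
[default]
region = us-west-2
output = json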

Related

AWS CLI installation in PowerShell

I have installed Chocolatey, but when I try to install awscli with the command:
choco install awscli
the output is:
Cannot find file at '../choco.exe' (C:\ProgramData\chocolatey\choco.exe). This usually indicates a missing or moved file.

In AWS EC2, sudo: apt-get: command not found error

I have created an instance on Amazon Linux, and I want to install python-dev. For this I was using
sudo apt-get install python-dev (or any other package), but it throws a command not found error.
Instead of apt-get I also tried the yum command, but that did not work either.
I have created a Flask application, and I am using FileZilla and PuTTY. I run the application (the Python files) through PuTTY. I installed pycrypto, but it still shows "no module named ...", so somewhere I learned that I should install the python-dev package. That is why I was using the apt-get command.
Amazon Linux is based on the Red Hat distribution, so you should use yum install [package name] instead of apt-get. The apt-get command is used on Debian-based distributions.
You can use yum search [package name] to search for packages by name.
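For example (assumption: on RPM-based distributions such as Amazon Linux, the counterpart of Debian's python-dev package is typically named python-devel):
yum search python-devel
sudo yum install python-devel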
Hope this helps.

How to uninstall packages installed with pip3 and their dependencies?

I was installing apache-airflow on my CentOS 8 machine. Only pip3 works in my environment. I did something with the environment variables which created two config files for Airflow, and I am not able to find the other config file to delete it. So I was trying to uninstall Airflow. I used
pip3 uninstall apache-airflow
It removed the package, but the dependent packages that were installed with it are still there. I googled and found pip-autoremove, but it doesn't work for pip3.
I am trying to find a way to cleanly reinstall Airflow by removing all the old files and dependent packages. Is there a way to use autoremove with pip3, or are there other alternatives for my issue?
One option is to make a new virtual environment and install your package inside it:
python3 -m venv /path/to/new/virtual/environment
source /path/to/new/virtual/environment/bin/activate
pip3 install apache-airflow
pip3 freeze > dependencies.txt
The freeze output now lists exactly apache-airflow and its dependencies. Go back to your original working environment and delete them all:
pip3 uninstall -r <path>/dependencies.txt
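pip will prompt for confirmation on each package; pip's -y flag skips the prompts:
pip3 uninstall -y -r <path>/dependencies.txt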
Delete all the files under $AIRFLOW_HOME (default path: ~/airflow). Airflow looks for its config file at $AIRFLOW_HOME/airflow.cfg. Then reinstall Airflow, setting $AIRFLOW_HOME to the place where you want all your config files and DAGs, as described in https://airflow.apache.org/start.html.
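A minimal sketch of that reset (the home path is an example; the database init command differs between Airflow versions):
rm -rf ~/airflow                      # remove the old default AIRFLOW_HOME
export AIRFLOW_HOME=~/airflow-home    # example: where config files and DAGs should live
pip3 install apache-airflow
airflow initdb                        # recreates airflow.cfg under $AIRFLOW_HOME (airflow db init on 2.x)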

How do I install pip modules on Google Compute Engine?

I am trying to run some Python scripts over SSH on a Google Compute Engine instance, but none of the installed pip modules are found, because my user does not have permission to the .cache/pip folder. Is there a correct way to do this?
You should be running this with the root user.
Also, if you need pip inside your GCP Instance, you can use the following commands:
sudo curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
sudo python get-pip.py
Use:
sudo apt-get install python3-pip
sudo runs this command as an administrator
apt-get is the standard package manager used on Debian Linux distributions
python3-pip is the package name for pip3
Once installed, you can install PIP modules with:
pip3 install MODULE_NAME
for example:
pip3 install tensorflow
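If the underlying problem is just the permission error from the question, a per-user install avoids needing root entirely; pip's --user flag installs under ~/.local:
pip3 install --user tensorflow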
I'm not entirely sure there is one correct way to do this, but an easy way would be to use the conda python package manager.
The lighter version of it is Miniconda. You get a minimal Python installation with pip preinstalled, plus virtual environment support if you need it. Assuming you are running on Linux and want Python 3, you'll have to run
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
and then install conda with
bash Miniconda3-latest-Linux-x86_64.sh
At the end of this process you should have a minimal python installation (that includes pip) and you'll be able to install packages with pip as you are used to.
You might want to install some basic libraries first -
sudo apt-get install bzip2 libxml2-dev
Then install Miniconda as described by @teoguso and restart your shell:
source ~/.bashrc
You can then use conda or pip to install your packages
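For example (the package names are only illustrations):
conda install numpy
pip install requests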

Errors while trying to use pip on OpenShift

I am getting the following error both when pushing via git and when logging in via rhc and trying to install the requirements file:
The directory '/var/lib/openshift/***/.cache/pip/http' or its parent
directory is not owned by the current user and the cache has been disabled.
Please check the permissions and owner of that directory.
If executing pip with sudo, you may want sudo's -H flag.
You are using pip version 7.1.0, however version 8.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
I am not trying to install with sudo.
What I am trying to do is:
Log in via rhc and ssh: rhc ssh 'app'
activate venv: source $OPENSHIFT_PYTHON_DIR/virtenv/bin/activate
pip install -r "$OPENSHIFT_REPO_DIR/requirements.txt"
Note that $OPENSHIFT_PYTHON_DIR and $OPENSHIFT_REPO_DIR are the environment variables given by OpenShift to access the relevant folders.
Any ideas? I am on a Python 2.7 cartridge.
OpenShift will automatically install your dependencies based on a requirements.txt file, so you shouldn't SSH into your app and do that yourself.
You can find more information about it on the developer center pages.
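As a sketch, a requirements.txt committed at the root of your git repo (the version pins here are illustrative) is picked up automatically, and the Python cartridge installs the listed packages during the build on git push:
# requirements.txt at the root of the repo
Flask==0.12
requests==2.11.1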
