I am using macOS High Sierra.
I installed the AWS CLI a long time ago and don't remember how.
The installation is a little unusual.
I can run aws from any folder; this works:
$ aws --version
aws-cli/1.11.121 Python/2.7.13 Darwin/17.4.0 botocore/1.7.12
However, running
$ which aws
returns nothing.
I thought it might be an alias, but
$ alias | grep aws
also returns nothing.
It's not installed with Homebrew either; this returns nothing:
$ brew list | grep aws
This is a problem because a few CLI programs I have run (including AWS SAM and a build script from work) complain that aws is not on the PATH.
I would much rather have a "regular installation" of the AWS CLI, where the executable sits in some bin folder that is on the PATH.
Instead it's using some "magic" I am unfamiliar with, and not even AWS's own tools (AWS SAM) seem to like the way it's installed.
Any advice would be appreciated.
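One way to narrow this down is to ask the shell every way it can resolve the name, not just via PATH lookup. A generic sketch (bash/zsh syntax):

```shell
type -a aws                 # reports aliases, functions, builtins, and files
alias aws 2>/dev/null       # prints the alias definition, if one exists
echo "$PATH" | tr ':' '\n'  # inspect each PATH entry one per line
```

If `type -a` reports a shell function or alias, it was probably defined in a profile file (~/.bash_profile, ~/.zshrc, etc.), which would explain why `which` finds nothing.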
I solved the problem by running
$ pip uninstall awscli
$ brew upgrade
$ brew install awscli
Now I get this result
$ which aws
/usr/local/bin/aws
"AWS Sam" and the other build script I use at work are now working.
I'm trying to install nvm/node on an Amazon Linux 2 EC2 instance using the following userData script:
#!/bin/bash
curl https://raw.githubusercontent.com/creationix/nvm/v0.39.1/install.sh | bash
. ~/.nvm/nvm.sh
nvm install 16.17.0
nvm use 16.17.0
However, when I SSH into the Instance, neither nvm nor node are installed. If I run the commands manually while SSH'd into the Instance, they work fine.
Anyone have any thoughts on why the installs don't work in the userData script? Thanks for any thoughts!
Although I still haven't figured out why the userData script above doesn't work (it used to, so I'm not sure what changed), I was able to install Node using the following userData, so I thought I'd share.
#!/usr/bin/env bash
yum update -y
curl -sL https://rpm.nodesource.com/setup_10.x | bash -
yum install -y nodejs git
I'd still be interested to know why the first script no longer works, if anyone has any idea.
Cheers!
Rob
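A likely explanation, though I haven't verified it against this exact AMI: userData runs as root in a non-interactive shell, so nvm installs into root's home directory and writes its setup lines into root's profile; when you later SSH in as ec2-user, neither nvm nor node is visible. A sketch that installs for ec2-user explicitly (the user name and version are assumptions carried over from the original script):

```shell
#!/bin/bash
# userData runs as root; run the nvm install as ec2-user so it lands in that
# user's home directory and gets wired into that user's shell profile.
su - ec2-user -c '
  curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.39.1/install.sh | bash
  export NVM_DIR="$HOME/.nvm"
  . "$NVM_DIR/nvm.sh"
  nvm install 16.17.0
  nvm alias default 16.17.0
'
```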
I have so far tried installing awscli via the command line and via the interactive installer, using brew and pip, however I cannot use the aws command due to path configuration.
which aws
/usr/local/bin
But when I try
aws --version
I get
/Library/Frameworks/Python.framework/Versions/2.7/bin/aws: No such file or directory
How can I have aws command run from the correct location?
I also tried running the aws command from /usr/local/bin but got the same error.
I have tried this in a new shell, and this is AWS CLI v2.
I also tried this:
type aws
aws is hashed (/Library/Frameworks/Python.framework/Versions/2.7/bin/aws)
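The "aws is hashed (...)" output is the key clue: bash caches the path it first resolved for a command, and with the default `checkhash` option off it keeps trying that stale path even after the file is removed, producing exactly this "No such file or directory" error. Clearing the cache should be enough (bash syntax):

```shell
hash -d aws   # forget the cached location of aws alone
# or
hash -r       # forget every cached command location
which aws     # should now report the current install
```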
Same issue:
$ /usr/local/bin/aws --version
aws-cli/2.0.23 Python/3.7.4 Darwin/19.4.0 botocore/2.0.0dev27
Solution:
export PATH=$PATH:/usr/local/bin   # PATH entries are directories, not the binary itself
If you have issues updating/upgrading the AWS CLI v1 to v2, take the following steps:
rm -rf /bin/aws
curl "https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install -i /usr/local/aws -b /bin
Then put the bin directory back on the PATH (the entry must be a directory, not the aws binary itself):
export PATH=$PATH:/usr/local/bin
I am trying to install a python function using M2Crypto in AWS Lambda.
I spun up an EC2 instance with the Lambda AMI image, installed M2Crypto into a virtualenv, and was able to get my function working on EC2.
Then I zipped up the site-packages directory and uploaded it to Lambda. I got this error:
Unable to import module 'epd_M2Crypto':
/var/task/M2Crypto/_m2crypto.cpython-36m-x86_64-linux-gnu.so: symbol
sk_deep_copy, version libcrypto.so.10 not defined in file
libcrypto.so.10 with link time reference
There are similar questions and hints here and here. I tried uploading the offending lib (libcrypto.so.10) in the zip file, but I still get the same error. I am assuming the error means that the EC2 version of libcrypto.so.10 (used to install M2Crypto) is different from the version on Lambda (that I am trying to run with), so M2Crypto complains.
If I look at the versions of openssl, they are different:
OpenSSL 1.0.0-fips 29 Mar 2010 (lambda version)
OpenSSL 1.0.2k-fips 26 Jan 2017 (ec2 version)
I don't think the answer is to downgrade openssl on EC2, as the 1.0.0 version is obsolete (AWS applies security patches, but the version still shows as 1.0.0). (Also, yum doesn't have versions this old.)
Here are the steps I used on the EC2 instance to get it working on EC2:
$ sudo yum -y update
$ sudo yum -y install python36
$ sudo yum -y install python-virtualenv
$ sudo yum -y groupinstall "Development Tools"
$ sudo yum -y install python36-devel.x86_64
$ sudo yum -y install openssl-devel.x86_64
$ mkdir ~/forlambda
$ cd ~/forlambda
$ virtualenv -p python3 venv
$ source venv/bin/activate
$ cd ~
$ pip install M2Crypto -t ~/forlambda/venv/lib/python3.6/site-packages/
$ cd ~/forlambda/venv/lib/python3.6/site-packages/
$ (create python function that uses M2Crypto)
$ zip -r9 ~/forlambda/archive.zip .
Then added to the zip file
/usr/bin/openssl
/usr/lib64/libcrypto.so.10
/usr/lib64/libssl.so.10
And uploaded to Lambda, which is where I am now stuck.
Do I need to do something to get Lambda to use the version of libcrypto.so.10 that I have included in the uploaded zip?
My function:
"""
Wrapper for M2Crypto
https://github.com/mcepl/M2Crypto
https://pypi.org/project/M2Crypto/
"""
from __future__ import print_function
from M2Crypto import RSA
import base64
import json
def decrypt_string(string_b64):
    rsa = RSA.load_key('private_key.pem')
    string_encrypted = base64.b64decode(string_b64)
    bytes = rsa.private_decrypt(string_encrypted, 1)
    string_plaintext = bytes.decode("utf-8")
    response = {
        's': string_plaintext,
        'status': "OK",
        'statuscode': 200
    }
    return response

def lambda_handler(event, context):
    response = ""
    action = event['action']
    if action == "decrypt":
        string_b64 = event['s']
        response = decrypt_string(string_b64)
    return response
AWS Support provided a resolution: upgrade to Python 3.7, where the issue is fixed:
Our internal team has confirmed that the issue is with Lambda's Python
runtime. In a few rare cases, when the Lambda function is being
initialised, Lambda is unable to link against the correct OpenSSL
libraries - instead linking against Lambda's own in-built OpenSSL
binaries.
The team suggests trying this out in the Python3.7 environment where
this behaviour has been fixed. Also, python3.7 is compiled with the
newer openssl 1.0.2 and you should not have to include the binaries in
the Lambda package.
(In practice, I still had to include the OpenSSL binaries in the package and could not get it working with the default libraries.)
First I ran this command on the EC2 instance to make sure I had included the correct .so file in my .zip:
$ ldd -v _m2crypto.cpython-36m-x86_64-linux-gnu.so
The output of the ldd command (edited for brevity):
libssl.so.10 => /lib64/libssl.so.10 (0x00007fd5f1892000)
libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007fd5f1433000)
Based on the output above, I included /lib64/libcrypto.so.10 in my .zip.
Also (at the suggestion of AWS Support), on the Lambda console, under 'Environment variables', I added a key 'LD_LIBRARY_PATH' with value '/var/task'.
I'm not sure if I needed both those changes to fix my problem, but it works right now and after three days of troubleshooting I am afraid to touch it to see if it was one or the other that made it work.
It is perhaps too brutal, but would it be possible to use LD_PRELOAD to force using your preferred version of OpenSSL library?
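If someone wants to try that, it might look like the following (untested on Lambda; `my-function` is a placeholder, and /var/task is where Lambda extracts the deployment package, so the bundled library would sit there):

```shell
# Set LD_PRELOAD as a Lambda environment variable so the dynamic loader binds
# the bundled libcrypto before the runtime's own copy.
aws lambda update-function-configuration \
  --function-name my-function \
  --environment "Variables={LD_PRELOAD=/var/task/libcrypto.so.10}"
```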
AWS Lambda runs code on an old version of Amazon Linux (amzn-ami-hvm-2017.03.1.20170812-x86_64-gp2), as mentioned in the official documentation:
https://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
So code that depends on shared libraries needs to be compiled in the same environment so it can link correctly.
What I usually do in such cases is create the virtualenv inside a Docker container. The virtualenv can then be packaged with the Lambda code.
Please note that if you need to install anything using yum (in the Docker container), you must use the same release server as the Amazon Linux version:
yum --releasever=2017.03 install ...
The virtualenv can be built on an EC2 instance as well, instead of a Docker container (though I find the Docker method easier). Just make sure that the AMI used for EC2 is the same as the one used by Lambda.
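As a sketch of that Docker workflow (the image tag and paths here are assumptions; the lambci/lambda build images mirror the Lambda runtime environment):

```shell
# Build the dependency tree inside a container that mirrors the Lambda runtime,
# then zip it from the host so the shared libraries link correctly.
docker run --rm -v "$PWD":/work -w /work lambci/lambda:build-python3.6 \
  pip install M2Crypto -t ./site-packages
cd site-packages
zip -r9 ../archive.zip .
```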
I recently ran the following command to install the Amazon Elastic Beanstalk Command Line Interface (EB CLI). I would now like to remove it from my Windows 10 machine.
C:\Users\Cale>pip install --upgrade --user awsebcli
What is the best command to run to ensure that it's fully removed from my machine?
I was able to uninstall using the following command:
C:\Users\Cale>pip uninstall awsebcli
I was uncertain how to do the uninstall, since I specified --user in the original install command. This Stack Overflow question helped me understand that the --user option would not matter during the uninstall process:
How to uninstall a package installed with pip install --user
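If you're ever unsure where pip put a package before uninstalling it, `pip show` reports the install location:

```shell
pip show awsebcli      # the Location: line shows the site-packages dir in use
pip uninstall awsebcli
```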
For me, awsebcli is not present in the output of pip list for the pip found on my $PATH. I get this error:
Skipping awsebcli as it is not installed.
Apparently, it belongs to the pip executable(s) in this location (Windows, PowerShell format):
$env:userprofile\.ebcli-virtual-env\Scripts\
The uninstall command worked properly using one of those executables.
After that, it seems that deleting the .ebcli-virtual-env directory will remove it fully from the machine: How do I remove/delete a virtualenv? (disclaimer: I'm not a pythonista :) )
I am using an Ubuntu 12.04. I have downloaded the EC2 CLI tools from the Amazon website. The following are the steps that I have done..
Unzipped the file and put it in a directory.
Set the Java class path properly (My Tomcat is working).
Set the EC2 home path, after that set the EC2 Home and bin path in bashrc
Set the access and secret key in bashrc.
When I try to start an instance, or do anything for that matter, from the terminal, I get the error
Required option '-K, --private-key KEY' missing (-h for usage)
Could someone please help me with this?
Posting this so it might be helpful for others. The problem was happening because, when I installed Ubuntu, I had installed the ec2-tools using apt-get from the terminal.
The version of the EC2 tools that Ubuntu ships is outdated (it was last updated in 2011).
When I found this out, I removed it and reconfigured the path to point at the current version of the EC2 CLI tools I had downloaded, and it worked!!! :)
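For context, the very old tools expected X.509 credentials, which is where the `-K, --private-key` error comes from, while later releases read access keys from environment variables. A typical bashrc fragment for the newer tools might look like this (the install path is a placeholder, and the key values are deliberately left for you to fill in):

```shell
export EC2_HOME=/opt/ec2-api-tools        # wherever you unzipped the tools
export PATH="$PATH:$EC2_HOME/bin"
export AWS_ACCESS_KEY=your-access-key-id
export AWS_SECRET_KEY=your-secret-key
```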
The way to install newer versions of the ec2-api-tools, as suggested by https://help.ubuntu.com/community/EC2StartersGuide, is to simply add the aws-tools PPA:
sudo apt-add-repository ppa:awstools-dev/awstools
sudo apt-get update
and then a simple apt-get install ec2-api-tools will install the correct version. :)