Create new EC2 keypair on AWS with Boto3

The boto3 1.1.2 docs say that the create_key_pair command is supposed to return a dict containing the private key of the newly created keypair.
I am indeed using that version…
>>> import boto3
>>> boto3.__version__
'1.1.2'
…yet when I run create_key_pair, I am instead returned a KeyPair object which does not appear to contain any information about the private key. The keypair does get created; I just have no way of retrieving the private key, because it is only ever available at the time of the keypair's creation. Older boto APIs apparently had a .save method on the KeyPair object to save the key to a file, but that too appears to have been removed from the API.
In boto3 1.1.2, how does one create a new EC2 keypair and retrieve its private key?

The private key is available in keypair['KeyMaterial']:
>>> import boto3
>>> ec2 = boto3.client('ec2')
>>> keypair = ec2.create_key_pair(KeyName='foo')
>>> keypair['KeyMaterial']
'-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCA...\n-----END RSA PRIVATE KEY-----'
References:
boto3 create_key_pair() documentation
boto3 EC2 migration guide

In newer versions of boto3 (I'm using 1.4.7), if you create the keypair through the ec2 resource (boto3.resource('ec2')) rather than the client, a KeyPair object is returned instead of a dict. In that case, change this line:
keypair['KeyMaterial']
to
keypair.key_material
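A minimal sketch of that resource-based flow (the key name 'foo' is just an example):
>>> import boto3
>>> ec2 = boto3.resource('ec2')  # note: resource, not client
>>> keypair = ec2.create_key_pair(KeyName='foo')
>>> keypair.key_material
'-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCA...\n-----END RSA PRIVATE KEY-----'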

Add the ability to save the keypair to a local file:
$ cat keypair.py
import boto3

keypair_name = "python_keypair"
ec2 = boto3.client('ec2')
keypair = ec2.create_key_pair(KeyName=keypair_name)

# The private key is only available in the create_key_pair response,
# so write it out immediately.
with open(keypair_name + ".pem", "w") as private_key_file:
    private_key_file.write(keypair['KeyMaterial'])
Now you should have the private key locally:
$ cat python_keypair.pem
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA14D9GAC7zVSRr3iHUyEaIF8ol5ccWBj9InVqYnF28l10EUCz
g5OLL5Ll6WiIYvlxhcRHM5d0os2Lg5SuKi0mTktYQ7QVD8RkdoEYIVrqgBir3VMf
8jG08JRhaJs4/OQk2+WAGecjcVx6joz9yXTRT3Maaec/4qNigfYMLpSsdAoZ0hrk
....
Move it to ~/.ssh and change its permissions to 600.
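If you prefer to do that last step from Python as well, a small sketch (the file name matches the example above):
import os
import shutil

pem_file = "python_keypair.pem"
dest = os.path.expanduser(os.path.join("~/.ssh", pem_file))

shutil.move(pem_file, dest)  # move the key into ~/.ssh
os.chmod(dest, 0o600)        # private keys must be readable only by you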

Related

How to download data with minio uri

When I run a simple Kubeflow pipeline on minikube in which the first pod's output is the second pod's input, the data appears to get saved in minio, even though I did not intend to use it. So to check the output data in minio, I opened http://localhost:9000/ and reached the login page.
When I run kubectl get secrets to find the credentials, I cannot find any minio secrets. minioadmin and minioadmin for the Access Key and Secret Key did not work either. How can I fetch data from a minio URI?
I define the pipeline like this:
import kfp
import kfp.components as comp
from kfp.components import load_component_from_file
example_component1_op = load_component_from_file("./pipelines/components/example_component1/example_component1.yaml")
example_component2_op = load_component_from_file("./pipelines/components/example_component2/example_component2.yaml")
@kfp.dsl.pipeline(name='example_pipeline_20220820')
def example_pipeline():
    example_component1_task = example_component1_op(
        input_1='/app/input.txt',
        input_2=8,
    )
    example_component2_task = example_component2_op(
        input_1=example_component1_task.outputs['output_1'],
        input_2=5,
    )
I found the Access Key and the Secret Key.
Access Key: minio
Secret Key: minio123
Ref
https://github.com/kubeflow/pipelines/blob/master/developer_guide.md
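For completeness: minio speaks the S3 API, so once you have the keys you can also fetch objects programmatically with boto3. A minimal sketch; the endpoint assumes minio is reachable at localhost:9000 as above, and the bucket and object names are placeholders to substitute with your own:
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:9000',
    aws_access_key_id='minio',
    aws_secret_access_key='minio123',
)
# Placeholder bucket and key; look them up from your run's minio URI.
s3.download_file('mlpipeline', 'artifacts/example/output_1', 'output_1')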

Deploy .sh file in ec2 using terraform

I am trying to deploy a *.sh file located on my localhost to EC2 using Terraform. Note that I am creating all of the infrastructure via Terraform, so to copy the file to the remote host I am using a Terraform provisioner. The question is: how can I find out the private key or password of the ubuntu user for deploying? Or maybe somebody knows a different solution. The goal is to run the .sh file on EC2. Thanks in advance.
If you want to do it using a provisioner and you have the private key local to where Terraform is being executed, then SCSI-9's solution should work well.
However, if you can't ensure the private key is available then you could always do something like how Elastic Beanstalk deploys and use S3 as an intermediary.
Something like this.
resource "aws_s3_bucket_object" "script" {
bucket = module.s3_bucket.bucket_name
key = regex("([^/]+$)", var.script_file)[0]
source = var.script_file
etag = filemd5(var.script_file)
}
resource "aws_instance" "this" {
depends_on = [aws_s3_bucket_object.script]
user_data = templatefile("${path.module}/.scripts/userdata.sh" {
s3_bucket = module.s3_bucket.bucket_name
object_key = aws_s3_bucket_object.script.id
}
...
}
And then somewhere in your userdata script, you can fetch the object from s3.
aws s3 cp s3://${s3_bucket}/${object_key} /some/path
Of course, you will also have to ensure that the instance has permissions to read from the s3 bucket, which you can do by attaching a role to the EC2 instance with the appropriate policy.
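If your bootstrap runs Python instead of the AWS CLI, the equivalent of the aws s3 cp line is a short boto3 call. A sketch; the bucket, key, and destination path are placeholders standing in for the template variables above, and credentials come from the attached instance role:
import boto3

s3 = boto3.client('s3')  # credentials resolved from the instance's role
# Placeholders for the s3_bucket/object_key template variables.
s3.download_file('my-bucket', 'my-script.sh', '/some/path/my-script.sh')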

No environment configuration found. DefaultAzureCredential()

I am trying to use this python sample to authenticate a client with an Azure Service
# pip install azure-identity
from azure.identity import DefaultAzureCredential
# pip install azure-mgmt-compute
from azure.mgmt.compute import ComputeManagementClient
# pip install azure-mgmt-network
from azure.mgmt.network import NetworkManagementClient
# pip install azure-mgmt-resource
from azure.mgmt.resource import ResourceManagementClient
SUBSCRIPTION_ID = creds_obj['SUBSCRIPTION_ID']
# Create client
# For other authentication approaches, please see: https://pypi.org/project/azure-identity/
resource_client = ResourceManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=SUBSCRIPTION_ID
)
network_client = NetworkManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=SUBSCRIPTION_ID
)
compute_client = ComputeManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=SUBSCRIPTION_ID
)
I keep getting "No environment configuration found."
The code sample is taken directly from the Microsoft GitHub: https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/resources/azure-mgmt-resource/azure/mgmt/resource/resources/_resource_management_client.py. Ideally I would like to manage this configuration using environment variables or a config file. Is there any way to do this?
When using the Azure Identity client library for Python, DefaultAzureCredential attempts to authenticate via the following mechanisms in this order, stopping when one succeeds: environment variables (EnvironmentCredential), managed identity, shared token cache, Visual Studio Code, Azure CLI, and interactive browser.
You could set the Environment Variables to fix it:
from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()
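A sketch of the environment variables that EnvironmentCredential reads for a service principal (the values are placeholders for your own):
import os

# Placeholder values; use your service principal's actual IDs and secret.
os.environ["AZURE_TENANT_ID"] = "<tenant-id>"
os.environ["AZURE_CLIENT_ID"] = "<client-id>"
os.environ["AZURE_CLIENT_SECRET"] = "<client-secret>"

from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()  # now picks up the environment credential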
Or set the properties in config and use ClientSecretCredential to create the credential:
from azure.identity import ClientSecretCredential
subscription_id = creds_obj["AZURE_SUBSCRIPTION_ID"]
tenant_id = creds_obj["AZURE_TENANT_ID"]
client_id = creds_obj["AZURE_CLIENT_ID"]
client_secret = creds_obj["AZURE_CLIENT_SECRET"]
credential = ClientSecretCredential(tenant_id=tenant_id, client_id=client_id, client_secret=client_secret)
I was having somewhat similar trouble following this Azure key vault tutorial which brought me here.
The solution I found was overriding the default values in the DefaultAzureCredential() constructor.
https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python
For reasons people far smarter than me will be able to explain, I found that even though I had credentials from the Azure CLI, it was not using those and was instead looking for environment credentials, which I did not have. So it threw an exception.
Once I set the exclude_environment_credential argument to True, it then looked for managed identity credentials, which again I did not have.
Eventually, when I force-excluded all credentials other than those from the CLI, it worked for me.
I hope this helps someone. Those with more experience please feel free to edit as you see fit
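For illustration, a sketch of what that exclusion ends up looking like, assuming you want only the Azure CLI credential to be tried (the keyword names are from the azure-identity documentation):
from azure.identity import DefaultAzureCredential

# Skip everything that comes before the Azure CLI in the default chain.
credential = DefaultAzureCredential(
    exclude_environment_credential=True,
    exclude_managed_identity_credential=True,
    exclude_shared_token_cache_credential=True,
    exclude_visual_studio_code_credential=True,
)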

How do you use an HTTP/HTTPS proxy with boto3?

With the old boto library it was simple enough to use the proxy, proxy_port, proxy_user and proxy_pass parameters when you open a connection. However, I could not find any equivalent way of programmatically defining the proxy parameters in boto3. :(
As of at least version 1.5.79, botocore accepts a proxies argument in the botocore config.
e.g.
import boto3
from botocore.config import Config
boto3.resource('s3', config=Config(proxies={'https': 'foo.bar:3128'}))
boto3 resource
https://boto3.readthedocs.io/en/latest/reference/core/session.html#boto3.session.Session.resource
botocore config
https://botocore.readthedocs.io/en/stable/reference/config.html#botocore.config.Config
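The same Config works for low-level clients as well; a quick sketch (the proxy address is a placeholder):
import boto3
from botocore.config import Config

# Route this client's HTTPS traffic through the proxy.
s3 = boto3.client('s3', config=Config(proxies={'https': 'http://someproxy:3128'}))
s3.list_buckets()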
If your proxy server does not have a password, try the following:
import os
os.environ["HTTP_PROXY"] = "http://proxy.com:port"
os.environ["HTTPS_PROXY"] = "https://proxy.com:port"
If your proxy server has a password, try the following:
import os
os.environ["HTTP_PROXY"] = "http://user:password@proxy.com:port"
os.environ["HTTPS_PROXY"] = "https://user:password@proxy.com:port"
Apart from altering the environment variable, I'll present what I found in the code.
Since boto3 uses botocore, I had a look through the source code:
https://github.com/boto/botocore/blob/66008c874ebfa9ee7530d944d274480347ac3432/botocore/endpoint.py#L265
From this link, we end up at:
def _get_proxies(self, url):
    # We could also support getting proxies from a config file,
    # but for now proxy support is taken from the environment.
    return get_environ_proxies(url)
...which is called by proxies = self._get_proxies(final_endpoint_url) in the EndpointCreator class.
Long story short, if you're using python2 it will use the getproxies method from urllib2 and if you're using python3, it will use urllib3.
get_environ_proxies produces a dict of the form {'http': 'url'} (and I'm guessing https too).
You could always patch the code, but that is poor practice.
This is one of the rare occasions when I would recommend monkey-patching, at least until the Boto developers allow connection-specific proxy settings:
import botocore.endpoint
def _get_proxies(self, url):
    return {'http': 'http://someproxy:1234/', 'https': 'https://someproxy:1234/'}
botocore.endpoint.EndpointCreator._get_proxies = _get_proxies
import boto3

https ssl password in node js 0.4

node 0.2.6 way:
var credentials = crypto.createCredentials({ "key": SSLKey, "cert": SSLCert, "ca": Ca, "password": SSLKeyPass })
var client = http.createClient(apiPort, host, true, credentials)
node 0.4 way:
var options = {
  host: apiHost,
  port: apiPort,
  method: 'GET',
  path: uri,
  headers: {host: host},
  key: SSLKey,
  cert: SSLCert,
  ca: Ca,
  password: SSLKeyPass
}
var request = https.request(options, function (response) {
As you can see, a password is needed; I don't know where the password is supposed to go in node 0.4.
Where does SSLKeyPass go on node 0.4?
So even in the node 0.2.6 source code, the crypto.js module is not looking for a password property in the object you pass to createCredentials. Here's the createCredentials source from node 0.2.6. In version 0.4.8 there is still no mention of the word password in the crypto.js module. Did your 0.2.6 code really work?
As a general comment: use openssl to decrypt your private key, keep that secured on disk, and have your node code read that file. This seems to be the most commonly used option. The alternatives are (A) manually typing the passphrase to decrypt your private key whenever you launch your node server (pretty much nobody does this), or (B) keeping your cleartext passphrase on disk, which is no different from just keeping the cleartext private key on disk, so AFAIK this is also a very uncommon solution to the problem of private key security.
You can decrypt your private key with the openssl command line like this:
openssl rsa -in your_encrypted_private.ekey -out your_private.key
openssl will prompt you for the passphrase interactively.
For the record, you can provide a passphrase when creating a Credentials object in Node.js. This section of the Node.js documentation on the crypto module states that the passphrase option can be provided for either the private key or the PFX file. You do not have to keep your private key in cleartext on disk for Node.
