Aws Ruby SDK credentials from file - ruby

I would like to store my credentials in ~/.aws/credentials and not in environment variables, but I am struggling.
To load the credentials I use (from here)
credentials = Aws::SharedCredentials.new({region: 'myregion', profile_name: 'myprofile'})
My ~/.aws/credentials is
[myprofile]
AWS_ACCESS_KEY = XXXXXXXXXXXXXXXXXXX
AWS_SECRET_KEY = YYYYYYYYYYYYYYYYYYYYYYYYYYY
My ~/.aws/config is
[myprofile]
output = json
region = myregion
I then define a resource with
aws = Aws::EC2::Resource.new(region: 'eu....', credentials: credentials)
but if I try for example
aws.instances.first
I get the error Error: #<Aws::Errors::MissingCredentialsError: unable to sign request without credentials set>
Everything works if I hard code the keys

According to the source code, the AWS SDK loads credentials automatically only from ENV.
You can create credentials with custom attributes.
credentials = Aws::Credentials.new(AWS_ACCESS_KEY, AWS_SECRET_KEY)
aws = Aws::EC2::Resource.new(region: 'eu-central-1', credentials: credentials)
In your specific case, it seems there is no way to pass custom credentials to SharedCredentials.

If you just do
credentials = Aws::SharedCredentials.new()
it loads the default profile. You should be able to load myprofile by passing in :profile_name as an option.
I don't know if you can also override the region though. You might want to try dropping that option and see how it works.
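For example, a minimal sketch (assuming ~/.aws/credentials uses the key names the SDK actually looks for, i.e. aws_access_key_id / aws_secret_access_key rather than AWS_ACCESS_KEY / AWS_SECRET_KEY):
require 'aws-sdk-ec2' # or 'aws-sdk' on SDK v2

# Reads [myprofile] from ~/.aws/credentials; the region stays on the resource.
credentials = Aws::SharedCredentials.new(profile_name: 'myprofile')
aws = Aws::EC2::Resource.new(region: 'eu-central-1', credentials: credentials)
puts aws.instances.first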

Related

How to download data with minio uri

When I run a simple Kubeflow pipeline on minikube, where the first pod's output is the second pod's input, the data seems to be saved in MinIO, even though I did not set that up intentionally. So, to check the output data in MinIO, I went to http://localhost:9000/ and reached the login page.
When I run kubectl get secrets to find the secret information, I could not find any MinIO secrets. Also, minioadmin / minioadmin for the Access Key and Secret Key did not work. How can I fetch data from a minio URI?
I define the pipeline like this:
import kfp
import kfp.components as comp
from kfp.components import load_component_from_file

example_component1_op = load_component_from_file("./pipelines/components/example_component1/example_component1.yaml")
example_component2_op = load_component_from_file("./pipelines/components/example_component2/example_component2.yaml")

@kfp.dsl.pipeline(name='example_pipeline_20220820')
def example_pipeline():
    example_component1_task = example_component1_op(
        input_1='/app/input.txt',
        input_2=8,
    )
    example_component2_task = example_component2_op(
        input_1=example_component1_task.outputs['output_1'],
        input_2=5,
    )
I found the Access Key and the Secret Key.
Access Key: minio
Secret Key: minio123
Ref
https://github.com/kubeflow/pipelines/blob/master/developer_guide.md
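If you want to pull an artifact out programmatically instead of through the web UI, here is a minimal sketch. It assumes a stock Kubeflow Pipelines install where MinIO runs as minio-service in the kubeflow namespace and artifacts land in the mlpipeline bucket; the URI below is a hypothetical example, so copy the real minio:// URI from the run's output in the UI.
# First expose MinIO locally (assumed service name/namespace):
#   kubectl port-forward -n kubeflow svc/minio-service 9000:9000
import boto3

# MinIO speaks the S3 API, so boto3 works against it via endpoint_url.
s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:9000',
    aws_access_key_id='minio',
    aws_secret_access_key='minio123',
)

# A minio://<bucket>/<key> URI maps directly to an S3 bucket and key.
uri = 'minio://mlpipeline/artifacts/example-pipeline/some-run/output_1.tgz'
bucket, key = uri[len('minio://'):].split('/', 1)
s3.download_file(bucket, key, 'output_1.tgz')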

Deploy .sh file in ec2 using terraform

I am trying to deploy a *.sh file located on my localhost to EC2 using Terraform. Note that I am creating all of the infrastructure via Terraform, so to copy the file to the remote host I am using a Terraform provisioner. The question is: how can I find out the private key or password of the ubuntu user for the deployment? Or maybe somebody knows a different solution. The goal is to run the .sh file on EC2. Thanks beforehand.
If you want to do it using a provisioner and you have the private key local to where Terraform is being executed, then SCSI-9's solution should work well.
However, if you can't ensure the private key is available then you could always do something like how Elastic Beanstalk deploys and use S3 as an intermediary.
Something like this.
resource "aws_s3_bucket_object" "script" {
bucket = module.s3_bucket.bucket_name
key = regex("([^/]+$)", var.script_file)[0]
source = var.script_file
etag = filemd5(var.script_file)
}
resource "aws_instance" "this" {
depends_on = [aws_s3_bucket_object.script]
user_data = templatefile("${path.module}/.scripts/userdata.sh" {
s3_bucket = module.s3_bucket.bucket_name
object_key = aws_s3_bucket_object.script.id
}
...
}
And then somewhere in your userdata script, you can fetch the object from s3.
aws s3 cp s3://${s3_bucket}/${object_key} /some/path
Of course, you will also have to ensure that the instance has permissions to read from the s3 bucket, which you can do by attaching a role to the EC2 instance with the appropriate policy.
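For completeness, here is a sketch of that wiring (resource names are illustrative; it assumes the same module.s3_bucket as above, and the instance would reference the profile via iam_instance_profile):
resource "aws_iam_role" "script_reader" {
  name = "script-reader"

  # Allow EC2 to assume the role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "read_script" {
  name = "read-script"
  role = aws_iam_role.script_reader.id

  # Only allow reading objects from the script bucket.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject"]
      Resource = "arn:aws:s3:::${module.s3_bucket.bucket_name}/*"
    }]
  })
}

resource "aws_iam_instance_profile" "script_reader" {
  name = "script-reader"
  role = aws_iam_role.script_reader.name
}

# On aws_instance.this: iam_instance_profile = aws_iam_instance_profile.script_reader.name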

Handle credentials in a kubeflow's ContainerOp

I'm trying to run a Kubeflow pipeline setup, and I have several environments (dev, staging, prod).
In my pipeline I'm using kfp.components.func_to_container_op to get a pipeline task instance (ContainerOp), and then execute it with the appropriate arguments that allows it to integrate with my s3 bucket:
from utils.test import test
test_op = comp.func_to_container_op(test, base_image='my_image')
read_data_task = read_data_op(
    bucket,
    aws_key,
    aws_pass,
)
arguments = {
    'bucket': 's3',
    'aws_key': 'key',
    'aws_pass': 'pass',
}
kfp.Client().create_run_from_pipeline_func(pipeline, arguments=arguments)
Each environment uses different credentials to connect, and those credentials are passed into the function:
def test(s3_bucket: str, aws_key: str, aws_pass: str):
    ....
    s3_client = boto3.client('s3', aws_access_key_id=aws_key, aws_secret_access_key=aws_pass)
    s3_client.upload_file(from_filename, bucket_name, to_filename)
So for each environment I need to update the arguments to contain the correct credentials, which makes this very hard to maintain: each time I want to promote from dev to staging to prod, I can't simply copy the code.
My question is what is the best approach to pass those credentials?
Ideally you should push any env-specific configuration as close to the cluster as possible (and as far away from the components).
You can create a Kubernetes secret in each environment with different credentials, then use that AWS secret in each task:
from kfp import aws

def my_pipeline():
    ...
    conf = kfp.dsl.get_pipeline_conf()
    conf.add_op_transformer(aws.use_aws_secret('aws-secret', 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'))
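The secret itself is created once per environment, for example with kubectl (a sketch: the secret name and key names just need to match what use_aws_secret is given above, and the namespace should be wherever your pipeline pods actually run):
kubectl create secret generic aws-secret -n kubeflow \
  --from-literal=AWS_ACCESS_KEY_ID=<key-for-this-environment> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<secret-for-this-environment>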
Maybe boto3 can auto-load the credentials using the secret files and the environment variables.
At least all GCP libraries and utilities do that with GCP credentials.
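boto3's default credential chain does read AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment, which is exactly what the secret transformer injects, so the component can drop the explicit key arguments. A sketch (file names are placeholders):
import boto3

def test(s3_bucket: str):
    # No explicit keys: boto3 falls back to AWS_ACCESS_KEY_ID /
    # AWS_SECRET_ACCESS_KEY from the pod environment.
    s3_client = boto3.client('s3')
    s3_client.upload_file('/tmp/from.txt', s3_bucket, 'to.txt')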
P.S. It's better to create issues in the official repo: https://github.com/kubeflow/pipelines/issues

AWS CDK Cross Account Lambda Deployment Permission Issue

I followed the tutorial below to create a Lambda deployment pipeline using CDK. When I keep everything in the same account it works well.
https://docs.aws.amazon.com/cdk/latest/guide/codepipeline_example.html
But my scenario is slightly different from the example because it involves two AWS accounts instead of one. I maintain the application source code and the pipeline in the OPS account, and this pipeline deploys the Lambda application to the UAT account.
OPS Account (12345678) - CodeCommit repo & CodePipeline
UAT Account (87654321) - Lambda application
As per the following AWS documentation (Cross-account actions section), I made the following changes to the source code.
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-codepipeline-actions-readme.html
The Lambda stack exposes the deploy action role as follows:
export class LambdaStack extends cdk.Stack {
  public readonly deployActionRole: iam.Role;

  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    ...
    this.deployActionRole = new iam.Role(this, 'ActionRole', {
      assumedBy: new iam.AccountPrincipal('12345678'), // pipeline account
      // the role has to have a physical name set
      roleName: 'DeployActionRole',
    });
  }
}
In the pipeline stack,
new codePipeline.Pipeline(this, 'MicroServicePipeline', {
  pipelineName: 'MicroServicePipeline',
  stages: [
    {
      stageName: 'Deploy',
      actions: [
        new codePipelineAction.CloudFormationCreateUpdateStackAction({
          role: props.deployActionRole,
          ....
        })
      ]
    }
  ]
});
The following is how I instantiate the stacks:
const app = new cdk.App();
const opsEnv: cdk.Environment = {account: '12345678', region: 'ap-southeast-2'};
const uatEnv: cdk.Environment = {account: '87654321', region: 'ap-southeast-2'};
const lambdaStack = new LambdaStack(app, 'LambdaStack', {env: uatEnv});
const lambdaCode = lambdaStack.lambdaCode;
const deployActionRole = lambdaStack.deployActionRole;
new MicroServicePipelineStack(app, 'MicroServicePipelineStack', {
  env: opsEnv,
  stackName: 'MicroServicePipelineStack',
  lambdaCode,
  deployActionRole
});
app.synth();
My AWS credentials profile looks like:
[profile uatadmin]
role_arn=arn:aws:iam::87654321:role/PigletUatAdminRole
source_profile=opsadmin
region=ap-southeast-2
When I run cdk diff or deploy I get an error saying:
➜ infra git:(master) ✗ cdk diff MicroServicePipelineStack --profile uatadmin
Including dependency stacks: LambdaStack
Stack LambdaStack
Need to perform AWS calls for account 87654321, but no credentials have been configured.
What have I done wrong here? Is it my CDK code or is it the way I have configured my AWS profile?
Thanks,
Kasun
The problem is with your AWS CLI configuration. You cannot use the CDK CLI natively to deploy resources in two separate accounts with one CLI command. There is a recent blog post on how to tell CDK which credentials to use, depending on the stack environment parameter:
https://aws.amazon.com/blogs/devops/cdk-credential-plugin/
The way we use it is to deploy stacks into separate accounts with multiple CLI commands, specifying the required profile for each. All parameters that need to be exchanged (such as the location of your lambdaCode) are passed via e.g. environment variables.
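With the profiles from the question, the multi-command approach would look something like this (a sketch; each stack is deployed with the profile that can reach its target account):
cdk deploy LambdaStack --profile uatadmin
cdk deploy MicroServicePipelineStack --profile opsadmin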
Just try using the environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
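For example (a sketch; substitute real values):
export AWS_ACCESS_KEY_ID=****
export AWS_SECRET_ACCESS_KEY=****
export AWS_DEFAULT_REGION=ap-southeast-2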
Or
~/.aws/credentials
[default]
aws_access_key_id=****
aws_secret_access_key=****
~/.aws/config
[default]
region=us-west-2
output=json
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html
It works for me.
I'm using cdk version 1.57.0
The issue is in the fact that you have resources that exist in multiple accounts and hence there are different credentials required to create those resources. However, CDK does not understand natively how to get credentials for those different accounts or when to swap between the different credentials. One way to fix this is to use cdk-assume-role-credential-plugin, which will allow you to use a single CDK deploy command to deploy to many different accounts.
I wrote a detailed tutorial here: https://johntipper.org/aws-cdk-cross-account-deployments-with-cdk-pipelines-and-cdk-assume-role-credential-plugin/
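As a sketch of the mechanics (the package name is taken from the blog post above; adjust the stack names and profile to your setup), the plugin is installed from npm and passed to the CDK CLI via its --plugin option:
npm install -g cdk-assume-role-credential-plugin
cdk deploy MicroServicePipelineStack --plugin cdk-assume-role-credential-plugin --profile opsadmin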

Firestore from Ruby - Could not load the default credentials

I am trying to get started with Google Firestore (Firebase) in Ruby, and I am not really sure how to load credentials for server communication.
I am running this code from a test
it 'do something with firestore', focus: true do
  firestore = Google::Cloud::Firestore.new(project_id: 'jg-jai-dev')
end
and get the following error
RuntimeError:
Could not load the default credentials. Browse to
https://developers.google.com/accounts/docs/application-default-credentials
for more information
# /home/david/.rvm/gems/ruby-2.4.1@scraper/gems/googleauth-0.6.2/lib/googleauth/application_default.rb:61:in `get_application_default'
# /home/david/.rvm/gems/ruby-2.4.1@scraper/gems/googleauth-0.6.2/lib/googleauth/credentials.rb:132:in `from_application_default'
# /home/david/.rvm/gems/ruby-2.4.1@scraper/gems/googleauth-0.6.2/lib/googleauth/credentials.rb:90:in `default'
# /home/david/.rvm/gems/ruby-2.4.1@scraper/gems/google-cloud-firestore-0.21.0/lib/google/cloud/firestore.rb:559:in `default_credentials'
# /home/david/.rvm/gems/ruby-2.4.1@scraper/gems/google-cloud-firestore-0.21.0/lib/google/cloud/firestore.rb:507:in `new'
# ./spec/services/export/firestore_job_export_spec.rb:220:in `block (3 levels) in <top (required)>'
When I checked the documentation, it seems that I need some sort of credentials in a JSON file, but I am not sure where to find this file; I cannot see it in https://console.firebase.google.com
it 'where do I get the keyfile so that I can use Server authentication', focus: true do
  firestore = Google::Cloud::Firestore.new(project_id: 'jg-jai-dev', credentials: "keyfile.json")
end
Where do you actually get the KeyFile.json?
According to the source code you have several options to create your credentials.
In your case your code should work. Try putting your JSON into keyfile.json in the same folder as the spec, with credentials: 'keyfile.json'.
Another option is to create credentials object by yourself:
creds = Google::Auth::Credentials.new private_key: 2048
Google::Cloud::Firestore.new(project_id: 'jg-jai-dev', credentials: creds)
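Another option, if you would rather not pass :credentials at all, is to point application default credentials at a service account key file via GOOGLE_APPLICATION_CREDENTIALS, which is the lookup the error message refers to. A sketch (the key file path is hypothetical):
require 'google/cloud/firestore'

# Set before the client is constructed; googleauth reads this variable
# when building application default credentials.
ENV['GOOGLE_APPLICATION_CREDENTIALS'] = '/path/to/keyfile.json'
firestore = Google::Cloud::Firestore.new(project_id: 'jg-jai-dev')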
I had to install gcloud and generate a credentials file. The steps I took can be found in quickstart-linux and authentication/getting-started but I have also listed them here.
1. Download and install gcloud
   See: https://cloud.google.com/sdk/docs/quickstart-linux
2. Run gcloud
   gcloud init
3. Select a default cloud project (not sure if this is needed)
At this stage I had the tooling needed to set up a service account and create keyfile.json. I did need to exit the terminal and run it again before moving on to step 4.
4. Set up a service account using the command line
   gcloud iam service-accounts create [NAME]
   gcloud iam service-accounts create my-service-account
5. Assign specific project permissions
   gcloud projects add-iam-policy-binding [PROJECT_ID] --member "serviceAccount:[NAME]@[PROJECT_ID].iam.gserviceaccount.com" --role "roles/owner"
   gcloud projects add-iam-policy-binding jg-jai-dev --member "serviceAccount:my-service-account@cool-project.iam.gserviceaccount.com" --role "roles/owner"
6. Generate a key
   gcloud iam service-accounts keys create google-cloud-key.json --iam-account my-service-account@cool-project.iam.gserviceaccount.com
7. Alter the Ruby code
   Sample code inside a spec test:
   it 'do something with firestore', focus: true do
     firestore = Google::Cloud::Firestore.new(project_id: 'jg-jai-dev', credentials: "./google-cloud-key.json")
   end
