Firestore from Ruby - Could not load the default credentials

I am trying to get started with Google Firestore (Firebase) in Ruby and am not sure how to load credentials for server communication.
I am running this code from a test:
it 'do something with firestore', focus: true do
  firestore = Google::Cloud::Firestore.new(project_id: 'jg-jai-dev')
end
and get the following error
RuntimeError:
Could not load the default credentials. Browse to
https://developers.google.com/accounts/docs/application-default-credentials
for more information
# /home/david/.rvm/gems/ruby-2.4.1@scraper/gems/googleauth-0.6.2/lib/googleauth/application_default.rb:61:in `get_application_default'
# /home/david/.rvm/gems/ruby-2.4.1@scraper/gems/googleauth-0.6.2/lib/googleauth/credentials.rb:132:in `from_application_default'
# /home/david/.rvm/gems/ruby-2.4.1@scraper/gems/googleauth-0.6.2/lib/googleauth/credentials.rb:90:in `default'
# /home/david/.rvm/gems/ruby-2.4.1@scraper/gems/google-cloud-firestore-0.21.0/lib/google/cloud/firestore.rb:559:in `default_credentials'
# /home/david/.rvm/gems/ruby-2.4.1@scraper/gems/google-cloud-firestore-0.21.0/lib/google/cloud/firestore.rb:507:in `new'
# ./spec/services/export/firestore_job_export_spec.rb:220:in `block (3 levels) in <top (required)>'
When I checked the documentation, it seems that I need some sort of credentials in a JSON file, but I am not sure where to find this file; I cannot see it in https://console.firebase.google.com.
it 'where do I get the keyfile so that I can use Server authentication', focus: true do
  firestore = Google::Cloud::Firestore.new(project_id: 'jg-jai-dev', credentials: "keyfile.json")
end
Where do you actually get the keyfile.json?

According to the source code, you have several options for creating your credentials.
In your case, your code should work. Try putting your JSON into keyfile.json in the same folder as the spec and passing credentials: 'keyfile.json'.
Another option is to create the credentials object yourself from the keyfile and pass it in:
creds = Google::Auth::Credentials.new('keyfile.json')
Google::Cloud::Firestore.new(project_id: 'jg-jai-dev', credentials: creds)
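If a relative keyfile path proves brittle in tests, here is a minimal sketch of my own (not from the original answer, assuming the gem also accepts the parsed keyfile contents as a Hash for the credentials option, as the google-cloud gems document; the absolute path is just a placeholder):
require 'json'
require 'google/cloud/firestore'

# Read the service-account keyfile and pass its parsed contents directly,
# so the spec's working directory no longer matters.
keyfile = JSON.parse(File.read('/absolute/path/to/keyfile.json'))
firestore = Google::Cloud::Firestore.new(project_id: 'jg-jai-dev', credentials: keyfile)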

I had to install gcloud and generate a credentials file. The steps I took can be found in quickstart-linux and authentication/getting-started, but I have also listed them here.
Download and install gcloud
See: https://cloud.google.com/sdk/docs/quickstart-linux
Run gcloud
gcloud init
Select a default cloud project (not sure if this is needed)
At this stage I had the tooling needed to set up a service account and create keyfile.json.
I did need to exit the terminal and run it again before moving on to step 4.
Set up a service account using the command line
gcloud iam service-accounts create [NAME]
gcloud iam service-accounts create my-service-account
Assign specific project permissions
gcloud projects add-iam-policy-binding [PROJECT_ID] --member "serviceAccount:[NAME]@[PROJECT_ID].iam.gserviceaccount.com" --role "roles/owner"
gcloud projects add-iam-policy-binding jg-jai-dev --member "serviceAccount:my-service-account@jg-jai-dev.iam.gserviceaccount.com" --role "roles/owner"
Generate Key
gcloud iam service-accounts keys create google-cloud-key.json --iam-account my-service-account@jg-jai-dev.iam.gserviceaccount.com
Alter the Ruby code
Sample code inside a spec test:
it 'do something with firestore', focus: true do
  firestore = Google::Cloud::Firestore.new(project_id: 'jg-jai-dev', credentials: "./google-cloud-key.json")
end
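A side note of my own, not part of the original answer: the error above comes from Application Default Credentials, and googleauth also picks up the key from the GOOGLE_APPLICATION_CREDENTIALS environment variable, so you can avoid hard-coding the path in the spec. A rough sketch:
require 'google/cloud/firestore'

# Point Application Default Credentials at the key generated above; normally
# you would export this in the shell instead, e.g.
#   export GOOGLE_APPLICATION_CREDENTIALS=./google-cloud-key.json
ENV['GOOGLE_APPLICATION_CREDENTIALS'] = File.expand_path('./google-cloud-key.json')

firestore = Google::Cloud::Firestore.new(project_id: 'jg-jai-dev')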

Related

Handle credentials in a kubeflow's ContainerOp

I'm trying to run a kubeflow pipeline setup and I have several environments (dev, staging, prod).
In my pipeline I'm using kfp.components.func_to_container_op to get a pipeline task instance (ContainerOp), and then execute it with the appropriate arguments that allow it to integrate with my s3 bucket:
from utils.test import test

test_op = comp.func_to_container_op(test, base_image='my_image')
read_data_task = read_data_op(
    bucket,
    aws_key,
    aws_pass,
)
arguments = {
    'bucket': 's3',
    'aws_key': 'key',
    'aws_pass': 'pass',
}
kfp.Client().create_run_from_pipeline_func(pipeline, arguments=arguments)
Each environment uses different credentials to connect, and those credentials are passed into the function:
def test(s3_bucket: str, aws_key: str, aws_pass: str):
    ....
    s3_client = boto3.client('s3', aws_access_key_id=aws_key, aws_secret_access_key=aws_pass)
    s3_client.upload_file(from_filename, bucket_name, to_filename)
So for each environment I need to update the arguments to contain the correct credentials, which makes it very hard to maintain: each time I want to promote from dev to stg to prod, I can't simply copy the code.
My question is what is the best approach to pass those credentials?
Ideally you should push any env-specific configuration as close to the cluster as possible (and as far away from the components as possible).
You can create a Kubernetes secret in each environment with the different credentials, then use that AWS secret in each task:
import kfp
from kfp import aws

def my_pipeline():
    ...
    conf = kfp.dsl.get_pipeline_conf()
    conf.add_op_transformer(aws.use_aws_secret('aws-secret', 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'))
Maybe boto3 can auto-load the credentials using the secret files and the environment variables.
At least all GCP libraries and utilities do that with GCP credentials.
P.S. It's better to create issues in the official repo: https://github.com/kubeflow/pipelines/issues

AWS CDK Cross Account Lambda Deployment Permission Issue

I followed the tutorial below to create a Lambda deployment pipeline using CDK. When I keep everything in the same account it works well.
https://docs.aws.amazon.com/cdk/latest/guide/codepipeline_example.html
But my scenario is slightly different from the example because it involves two AWS accounts instead of one. I maintain the application source code and the pipeline in the OPS account, and this pipeline deploys the Lambda application to the UAT account.
OPS Account (12345678) - CodeCommit repo & CodePipeline
UAT Account (87654321) - Lambda application
As per the following AWS documentation (Cross-account actions section), I made the changes below to the source code.
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-codepipeline-actions-readme.html
The Lambda stack exposes the deploy action role as follows:
export class LambdaStack extends cdk.Stack {
  public readonly deployActionRole: iam.Role;

  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    ...
    this.deployActionRole = new iam.Role(this, 'ActionRole', {
      assumedBy: new iam.AccountPrincipal('12345678'), // pipeline account
      // the role has to have a physical name set
      roleName: 'DeployActionRole',
    });
  }
}
In the pipeline stack,
new codePipeline.Pipeline(this, 'MicroServicePipeline', {
  pipelineName: 'MicroServicePipeline',
  stages: [
    {
      stageName: 'Deploy',
      actions: [
        new codePipelineAction.CloudFormationCreateUpdateStackAction({
          role: props.deployActionRole,
          ....
        })
      ]
    }
  ]
});
The following is how I instantiate the stacks:
const app = new cdk.App();

const opsEnv: cdk.Environment = {account: '12345678', region: 'ap-southeast-2'};
const uatEnv: cdk.Environment = {account: '87654321', region: 'ap-southeast-2'};

const lambdaStack = new LambdaStack(app, 'LambdaStack', {env: uatEnv});
const lambdaCode = lambdaStack.lambdaCode;
const deployActionRole = lambdaStack.deployActionRole;

new MicroServicePipelineStack(app, 'MicroServicePipelineStack', {
  env: opsEnv,
  stackName: 'MicroServicePipelineStack',
  lambdaCode,
  deployActionRole
});

app.synth();
My AWS credentials profile looks like this:
[profile uatadmin]
role_arn=arn:aws:iam::87654321:role/PigletUatAdminRole
source_profile=opsadmin
region=ap-southeast-2
When I run cdk diff or cdk deploy, I get an error saying:
➜ infra git:(master) ✗ cdk diff MicroServicePipelineStack --profile uatadmin
Including dependency stacks: LambdaStack
Stack LambdaStack
Need to perform AWS calls for account 87654321, but no credentials have been configured.
What have I done wrong here? Is it my CDK code or is it the way I have configured my AWS profile?
Thanks,
Kasun
The problem is with your AWS CLI configuration. You cannot use the CDK CLI natively to deploy resources in two separate accounts with one CLI command. There is a recent blog post on how to tell CDK which credentials to use, depending on the stack environment parameter:
https://aws.amazon.com/blogs/devops/cdk-credential-plugin/
The way we use it is to deploy stacks into separate accounts with multiple CLI commands, specifying the required profile for each. All parameters that need to be exchanged (such as the location of your lambdaCode) are passed via, e.g., environment variables.
Just try using the environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
Or
~/.aws/credentials
[default]
aws_access_key_id=****
aws_secret_access_key=****
~/.aws/config
[default]
region=us-west-2
output=json
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html
It works for me. I'm using CDK version 1.57.0.
The issue is that you have resources in multiple accounts, so different credentials are required to create those resources. However, CDK does not natively understand how to get credentials for those different accounts or when to swap between them. One way to fix this is to use the cdk-assume-role-credential-plugin, which allows you to deploy to many different accounts with a single cdk deploy command.
I wrote a detailed tutorial here: https://johntipper.org/aws-cdk-cross-account-deployments-with-cdk-pipelines-and-cdk-assume-role-credential-plugin/

AWS Lambda function got deleted. Not able to retrieve it

I am using the Serverless Framework in C# to execute queries in Athena. The AWS Lambda function was deleted automatically, and when I try to deploy it, it doesn't work.
sls deploy --stage dev    # to deploy the function
sls remove --stage dev    # to remove the function
When I tried to redeploy it, it gave an error like the one below:
[screenshot: deployment error output]
As mentioned in the screenshot above, for more error output I browsed the link it referenced, which shows the stack detail. I have attached it below.
[screenshot: stack detail]
serverless.yml
# Welcome to Serverless!
#
# This file is the main config file for your service.
# It's very minimal at this point and uses default values.
# You can always add more config options for more control.
# We've included some commented out config examples here.
# Just uncomment any of them to get that config option.
#
# For full config options, check the docs:
# docs.serverless.com
#
# Happy Coding!
service: management-athena

custom:
  defaultStage: dev
  currentStage: ${opt:stage, self:custom.defaultStage} # 'dev' is default unless overridden by --stage flag

provider:
  name: aws
  runtime: dotnetcore2.1
  stage: ${self:custom.currentStage}
  role: arn:aws:iam::***********:role/service-role/nexus_labmda_schema_partition # must validly reference a role defined in your account
  timeout: 300
  environment: # Service wide environment variables
    DATABASE_NAME: ${file(./config/config.${self:custom.currentStage}.json):DATABASE_NAME}
    TABLE_NAME: ${file(./config/config.${self:custom.currentStage}.json):TABLE_NAME}
    S3_PATH: ${file(./config/config.${self:custom.currentStage}.json):S3_PATH}
    MAX_SITE_TO_BE_PROCESSED: ${file(./config/config.${self:custom.currentStage}.json):MAX_SITE_TO_BE_PROCESSED}

package:
  artifact: bin/release/netcoreapp2.1/deploy-package.zip

functions:
  delete_partition:
    handler: CsharpHandlers::AwsAthena.AthenaHandler::DeletePartition
    description: Lambda function which runs at specified interval to delete athena partitions
    # The `events` block defines how to trigger the AthenaHandler.DeletePartition code
    events:
      - schedule:
          rate: cron(0 8 * * ? *) # triggered every day at 3:00 AM EST. Provided time is in UTC, so 3 A.M. EST is 8 A.M. UTC
          enabled: true
I found out the solution!
Sometimes we won't be able to deploy Lambda functions for many reasons. As @ASR mentioned in the comments, there might be Serverless Framework version issues, but in my case that didn't solve it. Just try deleting your function's log group from CloudWatch.
Go to the AWS console -> expand Services -> select CloudWatch -> select Logs -> search for your log group, select it, and delete it. If your function name is my_function, then your log group name will be something like this: /aws/lambda/my_function
Then just deploy your Lambda function again.
I am posting this hoping it helps someone! Please correct me if I am wrong.

Aws Ruby SDK credentials from file

I would like to store my credentials in ~/.aws/credentials and not in environment variables, but I am struggling.
To load the credentials I use the following (based on the documentation):
credentials = Aws::SharedCredentials.new({region: 'myregion', profile_name: 'myprofile'})
My ~/.aws/credentials is
[myprofile]
AWS_ACCESS_KEY = XXXXXXXXXXXXXXXXXXX
AWS_SECRET_KEY = YYYYYYYYYYYYYYYYYYYYYYYYYYY
My ~/.aws/config is
[myprofile]
output = json
region = myregion
I then define a resource with
aws = Aws::EC2::Resource.new(region: 'eu....', credentials: credentials)
but if I try for example
aws.instances.first
I get the error Error: #<Aws::Errors::MissingCredentialsError: unable to sign request without credentials set>
Everything works if I hard code the keys
According to the source code, the SDK only loads credentials automatically from its default provider chain (for example, the ENV variables).
You can create credentials with custom attributes.
credentials = Aws::Credentials.new(AWS_ACCESS_KEY, AWS_SECRET_KEY)
aws = Aws::EC2::Resource.new(region: 'eu-central-1', credentials: credentials)
In your specific case, it seems there is no way to pass custom credentials to SharedCredentials.
If you just do
credentials = Aws::SharedCredentials.new()
it loads the default profile. You should be able to load myprofile by passing in :profile_name as an option.
I don't know if you can also override the region there, though. You might want to try dropping that option and see how it works.
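For what it's worth, a sketch of my own (not from the original answers): the shared credentials file is expected to use the lowercase aws_access_key_id / aws_secret_access_key key names, so the profile shown in the question would likely not be picked up as written. Assuming the file is fixed accordingly, loading the profile by name and setting the region on the resource should work:
require 'aws-sdk'

# ~/.aws/credentials is assumed to contain:
#   [myprofile]
#   aws_access_key_id = XXXXXXXXXXXXXXXXXXX
#   aws_secret_access_key = YYYYYYYYYYYYYYYYYYYYYYYYYYY
credentials = Aws::SharedCredentials.new(profile_name: 'myprofile')

# The region goes on the client/resource, not on SharedCredentials.
ec2 = Aws::EC2::Resource.new(region: 'eu-central-1', credentials: credentials)
instance = ec2.instances.first
puts instance ? instance.id : 'no instances found'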

Can't create bucket using aws-sdk ruby gem. Aws::S3::Errors::SignatureDoesNotMatch

I have a new computer and I'm trying to set up my AWS CLI environment so that I can run a management console I've created.
This is the code I'm running:
def create_bucket(bucket_args)
  s3_client = Aws::S3::Client.new(signature_version: 'v4')
  s3_client.create_bucket(bucket_args)
end
Which raises this error:
Aws::S3::Errors::SignatureDoesNotMatch - The request signature we calculated does not match the signature you provided. Check your key and signing method.:
This was working properly on my other computer, which I no longer have access to. I remember debugging this same error on the other computer, and I thought I had resolved it by adding signature_version = s3v4 to my ~/.aws/config file. But this fix is not working on my new computer, and I'm not sure why.
To give some more context: I am using aws-sdk (2.5.5) and these aws cli specs: aws-cli/1.11.2 Python/2.7.12 Linux/4.4.0-38-generic botocore/1.4.60
In this case the issue was that my aws credentials (in ~/.aws/credentials) - specifically my secret token - were invalid.
The original had a slash in it:
xx/xxxxxxxxxxxxxxxxxxxxxxxxxx
which I didn't notice at first, so when I double-clicked the token to select the word, it didn't include the first three characters. I then pasted this truncated value into the terminal when running aws configure.
To fix this, I found the correct, original secret access token and set the correct value in ~/.aws/credentials.
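As an extra sanity check of my own (not part of the original answer), STS GetCallerIdentity is a quick way to confirm which credentials the SDK is actually resolving, assuming an aws-sdk version recent enough to include that STS API. A rough sketch:
require 'aws-sdk'

# Whatever the default provider chain resolves (ENV, ~/.aws/credentials, ...)
# is used here; a mangled secret key surfaces immediately as a signing error.
sts = Aws::STS::Client.new(region: 'us-east-1')
identity = sts.get_caller_identity
puts identity.account
puts identity.arn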
