How do you set the deployment configuration when using the Google API for Deployment Manager? - ruby

I am trying to use the Ruby Google API Client to create a deployment on the Google Cloud Platform (GCP).
I have a YAML file for the configuration:
resources:
- name: my-vm
  type: compute.v1.instance
  properties:
    zone: europe-west1-b
    machineType: zones/europe-west1-b/machineTypes/f1-micro
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: global/images/myvm-1487178154
    networkInterfaces:
    - network: $(ref.my-subnet.selfLink)
      networkIP: 172.31.54.11
# Create the network for the machines
- name: my-subnet
  type: compute.v1.network
  properties:
    IPv4Range: 172.31.54.0/24
I have tested that this works using the gcloud command line tool.
I now want to do this in Ruby using the API. I have the following code:
require 'google/apis/deploymentmanager_v2'
require 'googleauth'
require 'googleauth/stores/file_token_store'
SCOPE = Google::Apis::DeploymentmanagerV2::AUTH_CLOUD_PLATFORM
PROJECT_ID = "my-project"
ENV['GOOGLE_APPLICATION_CREDENTIALS'] = "./service_account.json"
deployment_manager = Google::Apis::DeploymentmanagerV2::DeploymentManagerService.new
deployment_manager.authorization = Google::Auth.get_application_default([SCOPE])
All of this is working in that I am authenticated and I have a deployment_manager object I can work with.
I want to use the insert_deployment method which has the following signature:
#insert_deployment(project, deployment_object = nil, preview: nil, fields: nil, quota_user: nil, user_ip: nil, options: nil) {|result, err| ... } ⇒ Google::Apis::DeploymentmanagerV2::Operation
The deployment_object type is 'Google::Apis::DeploymentmanagerV2::Deployment'. I can create this object, but I do not know how to import my YAML file into it so that I can programmatically perform the deployment.
There is another class called ConfigFile which seems akin to the --config command-line option, but again I do not know how to load the file into it, nor how to then turn it into the correct object for insert_deployment.

I have worked this out.
Different classes need to be nested so that the configuration is picked up. For example:
require 'google/apis/deploymentmanager_v2'
require 'googleauth'
SCOPE = Google::Apis::DeploymentmanagerV2::AUTH_CLOUD_PLATFORM
PROJECT_ID = "my-project"
ENV['GOOGLE_APPLICATION_CREDENTIALS'] = "./service_account.json"
# Create a target configuration
target_configuration = Google::Apis::DeploymentmanagerV2::TargetConfiguration.new(config: {content: File.read('gcp.yaml')})
# Now create a deployment object
deployment = Google::Apis::DeploymentmanagerV2::Deployment.new(target: target_configuration, name: 'ruby-api-deployment')
# Attempt the deployment
response = deployment_manager.insert_deployment(PROJECT_ID, deployment)
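Since insert_deployment just receives the raw file content as a string, it can help to parse the YAML locally first and confirm it is well-formed before sending it to the API. A minimal sketch using only the Ruby standard library (valid_deployment_config? is a hypothetical helper, not part of the Google API client):

```ruby
require 'yaml'

# Parse the config locally and confirm it has a top-level 'resources' list,
# which is what Deployment Manager expects. The file name 'gcp.yaml' matches
# the answer above; adjust as needed.
def valid_deployment_config?(content)
  parsed = YAML.safe_load(content)
  parsed.is_a?(Hash) && parsed['resources'].is_a?(Array)
end

config = <<~YAML
  resources:
  - name: my-vm
    type: compute.v1.instance
YAML

valid_deployment_config?(config) # => true
```

This catches indentation mistakes before they turn into an API error from the deployment service.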
Hope this helps someone

Related

sap-cf-mailer VError: No service matches destination

I have a SAP CAP Nodejs application and I'm trying to send emails from the CAP application using sap-cf-mailer package.
I've created a destination service in BTP as mentioned in the sample, but when I try to deploy the application to BTP it fails.
When I run the application locally using cds watch, it gives following error
VError: No service matches destination
This is my mta.yaml
## Generated mta.yaml based on template version 0.4.0
## appName = CapTest
## language=nodejs; multitenant=false
## approuter=
_schema-version: '3.1'
ID: CapTest
version: 1.0.0
description: "A simple CAP project."
parameters:
  enable-parallel-deployments: true
build-parameters:
  before-all:
    - builder: custom
      commands:
        - npm install --production
        - npx -p @sap/cds-dk cds build --production
modules:
  # --------------------- SERVER MODULE ------------------------
  - name: CapTest-srv
    # ------------------------------------------------------------
    type: nodejs
    path: gen/srv
    parameters:
      buildpack: nodejs_buildpack
    requires:
      # Resources extracted from CAP configuration
      - name: CapTest-db
      - name: captest-destination-srv
    provides:
      - name: srv-api # required by consumers of CAP services (e.g. approuter)
        properties:
          srv-url: ${default-url}
  # -------------------- SIDECAR MODULE ------------------------
  - name: CapTest-db-deployer
    # ------------------------------------------------------------
    type: hdb
    path: gen/db
    parameters:
      buildpack: nodejs_buildpack
    requires:
      # 'hana' and 'xsuaa' resources extracted from CAP configuration
      - name: CapTest-db
resources:
  # services extracted from CAP configuration
  # 'service-plan' can be configured via 'cds.requires.<name>.vcap.plan'
  # ------------------------------------------------------------
  - name: CapTest-db
    # ------------------------------------------------------------
    type: com.sap.xs.hdi-container
    parameters:
      service: hana # or 'hanatrial' on trial landscapes
      service-plan: hdi-shared
    properties:
      hdi-service-name: ${service-name}
  - name: captest-destination-srv
    type: org.cloudfoundry.existing-service
This is the js file of the CDS service
const cds = require('@sap/cds')
const SapCfMailer = require('sap-cf-mailer').default;

const transporter = new SapCfMailer("MAILTRAP");

module.exports = cds.service.impl(function () {
  this.on('sendmail', sendmail);
});

async function sendmail(req) {
  try {
    const result = await transporter.sendMail({
      to: 'someoneimportant@sap.com',
      subject: `This is the mail subject`,
      text: `body of the email`
    });
    return JSON.stringify(result);
  }
  catch {
  }
};
I'm following the samples below for this:
Send an email from a nodejs app
Integrate email to CAP application
Did you create your default-.. json files? They are required to connect to remote services on your BTP tenant. You can find more info about this on SAP blogs like this one:
https://blogs.sap.com/2020/04/03/sap-application-router/
You could also use the sap-cf-localenv command:
https://github.com/jowavp/sap-cf-localenv
This tool is experimental. As far as I know, it only works with CF CLI V6; higher versions fetch the service keys in another format, which causes the command to fail.
Kind regards,
Thomas

How do I access the Cognito UserPoolClient Secret in Lambda function?

I have created Cognito UserPool and UserpoolClient via Resources in serverless.yml file like this -
CognitoUserPool:
  Type: AWS::Cognito::UserPool
  Properties:
    AccountRecoverySetting:
      RecoveryMechanisms:
        - Name: verified_email
          Priority: 2
    UserPoolName: ${self:provider.stage}-user-pool
    UsernameAttributes:
      - email
    MfaConfiguration: OFF
    Policies:
      PasswordPolicy:
        MinimumLength: 8
        RequireLowercase: True
        RequireNumbers: True
        RequireSymbols: True
        RequireUppercase: True
CognitoUserPoolClient:
  Type: AWS::Cognito::UserPoolClient
  Properties:
    ClientName: ${self:provider.stage}-user-pool-client
    UserPoolId:
      Ref: CognitoUserPool
    ExplicitAuthFlows:
      - ALLOW_USER_PASSWORD_AUTH
      - ALLOW_REFRESH_TOKEN_AUTH
    GenerateSecret: true
Now I can pass the Userpool and UserpoolClient as environment variables to the lambda functions like this -
my_function:
  package: {}
  handler:
  events:
    - http:
        path: <path>
        method: post
        cors: true
  environment:
    USER_POOL_ID: !Ref CognitoUserPool
    USER_POOL_CLIENT_ID: !Ref CognitoUserPoolClient
I can access these IDs in my code as -
USER_POOL_ID = os.environ['USER_POOL_ID']
USER_POOL_CLIENT_ID = os.environ['USER_POOL_CLIENT_ID']
I have printed the values and they are being printed correctly. However, the UserpoolClient also generates an app client secret, which I need when generating the secret hash. How do I access the app client secret (the UserpoolClient's secret) in my Lambda?
Probably not what you hoped for, but you cannot export the client secret in CloudFormation explicitly. Take a look at the return values from AWS::Cognito::UserPoolClient. There you can only get the client ID.
What you could do is create the client in another CF template and either add a custom resource there to read the secret and output it, or add an intermediate step where you fetch the value with the CLI and then pass it into serverless.
There is currently no other option.
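Alternatively, if the secret is only needed to build the SECRET_HASH, the Lambda can look it up at runtime with a cognito-idp DescribeUserPoolClient call (the function's role needs that permission) and compute the hash itself. A sketch of the hash computation, with the boto3 lookup shown in comments (values are illustrative):

```python
import base64
import hashlib
import hmac

# At runtime the secret can be fetched via boto3 (requires the
# cognito-idp:DescribeUserPoolClient permission on the function's role):
#   client = boto3.client("cognito-idp")
#   resp = client.describe_user_pool_client(
#       UserPoolId=os.environ["USER_POOL_ID"],
#       ClientId=os.environ["USER_POOL_CLIENT_ID"])
#   client_secret = resp["UserPoolClient"]["ClientSecret"]

def secret_hash(username: str, client_id: str, client_secret: str) -> str:
    """Cognito SECRET_HASH: Base64(HMAC-SHA256(secret, username + client_id))."""
    digest = hmac.new(
        client_secret.encode("utf-8"),
        (username + client_id).encode("utf-8"),
        hashlib.sha256,
    ).digest()
    return base64.b64encode(digest).decode()
```

Caching the fetched secret outside the handler avoids one API call per invocation.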

Rails 5.2 Shrine and Tus server: Cannot create a custom folder structure to save files

I am using Rails 5.2, Shrine 2.19 and tus server 2.3 for resumable file uploads.
routes.rb
mount Tus::Server => '/files'
model, file_resource.rb
class FileResource < ApplicationRecord
  # adds a `file` virtual attribute
  include ResumableFileUploader::Attachment.new(:file)
end
controllers/files_controller.rb
def create
  file = FileResource.new(permitted_params)
  ...
  file.save
end
config/initializers/shrine.rb
s3_options = {
  bucket: ENV['S3_MEDIA_BUCKET_NAME'],
  access_key_id: ENV['S3_ACCESS_KEY'],
  secret_access_key: ENV['S3_SECRET_ACCESS_KEY'],
  region: ENV['S3_REGION']
}

Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: 'file_library/shrine_cache', **s3_options),
  store: Shrine::Storage::S3.new(**s3_options), # public: true,
  tus: Shrine::Storage::Tus.new
}

Shrine.plugin :activerecord
Shrine.plugin :cached_attachment_data
Shrine.plugin :activerecord
Shrine.plugin :cached_attachment_data
config/initializers/tus.rb
Tus::Server.opts[:storage] = Tus::Storage::S3.new(
  prefix: 'file_library',
  bucket: ENV['S3_MEDIA_BUCKET_NAME'],
  access_key_id: ENV['S3_ACCESS_KEY'],
  secret_access_key: ENV['S3_SECRET_ACCESS_KEY'],
  region: ENV['S3_REGION'],
  retry_limit: 3
)
Tus::Server.opts[:redirect_download] = true
My issue is I cannot override the generate_location method of Shrine class to store the files in different folder structure in AWS s3.
All the files are created inside s3://bucket/file_library/ (the prefix provided in tus.rb). I want something like s3://bucket/file_library/:user_id/:parent_id/ folder structure.
I found that the Tus configuration overrides all of my ResumableFileUploader class's custom options, so they have no effect on uploading.
resumable_file_uploader.rb
class ResumableFileUploader < Shrine
  plugin :validation_helpers # DOES NOT WORK
  plugin :pretty_location # DOES NOT WORK

  def generate_location(io, context = {}) # DOES NOT WORK
    f = context[:record]
    name = super # the default unique identifier
    puts "<<<<<<<<<<<<<<<<<<<<<<<<<<<<" * 10
    ['users', f.author_id, f.parent_id, name].compact.join('/')
  end

  Attacher.validate do # DOES NOT WORK
    validate_max_size 15 * 1024 * 1024, message: 'is too large (max is 15 MB)'
  end
end
So how can I create custom folder structure in S3 using tus options (as shrine options not works)?
A tus server upload doesn't touch Shrine at all, so the #generate_location won't be called, but instead the tus-ruby-server decides the location.
Note that the tus server should only act as a temporary storage, you should still use Shrine to copy the file to a permanent storage (aka "promote"), just like with regular direct uploads. On promotion the #generate_location method will be called, so the file will be copied to the desired location; this all happens automatically with default Shrine setup.
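To illustrate what happens at promotion, the path-building part of the uploader above can be isolated as plain Ruby (build_location is a hypothetical helper mirroring the generate_location body; Shrine itself is not involved):

```ruby
# Mirrors the body of generate_location in the uploader above:
# nil segments (e.g. a record without a parent_id) are dropped by #compact,
# so the resulting S3 key never contains empty path components.
def build_location(author_id, parent_id, name)
  ['users', author_id, parent_id, name].compact.join('/')
end

build_location(7, 42, 'abc123')  # => "users/7/42/abc123"
build_location(7, nil, 'abc123') # => "users/7/abc123"
```

With the default Shrine setup this key is generated only when the attacher promotes the cached tus upload to the store storage, not during the tus upload itself.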

Ruby: add new fields to YAML file

I've been searching around for a bit and couldn't find anything that really helped me, especially because sometimes things don't seem to be consistent.
I have the following YAML that I use to store data/ configuration stuff:
---
global:
  name: Core Config
  cfg_version: 0.0.1
  databases:
    main_database:
      name: Main
      path: ~/Documents/main.reevault
      read_only: false
...
I know how to update fields with:
cfg = YAML.load_file("test.yml")
cfg['global']['name'] = 'New Name'
File.open("test.yml", "w"){ |f| YAML.dump(cfg, f) }
And that's essentially everybody on the internet talks about. However here is my problem: I want to dynamically be able to add new fields to that file. e.g. under the "databases" section have a "secondary_db" field with it's own name, path and read_only boolean. I would have expected to do that by just adding stuff to the hash:
cfg['global']['databases']['second_db'] = nil
cfg['global']['databases']['second_db']['name'] = "Secondary Database"
cfg['global']['databases']['second_db']['path'] = "http://someurl.remote/blob/db.reevault"
cfg['global']['databases']['second_db']['read_only'] = "true"
File.open("test.yml", "w"){ |f| YAML.dump(cfg, f) }
But I get this error:
`<main>': undefined method `[]=' for nil:NilClass (NoMethodError)
My question now is: how do I do this? Is there a way with the YAML interface? Or do I have to write stuff into the file manually? I would prefer something via the YAML module as it takes care of formatting/ indentation for me.
Hope someone can help me.
You have to initialize cfg['global']['databases']['second_db'] to be a hash, not nil. Try this: cfg['global']['databases']['second_db'] = {}
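Putting it together, a minimal end-to-end sketch (keys match the question's YAML; the file is replaced by an inline string so it runs standalone):

```ruby
require 'yaml'

cfg = YAML.safe_load(<<~YAML)
  global:
    name: Core Config
    databases:
      main_database:
        name: Main
YAML

# Initialize the new entry as an empty hash first, then fill in its keys
cfg['global']['databases']['second_db'] = {}
cfg['global']['databases']['second_db']['name'] = 'Secondary Database'
cfg['global']['databases']['second_db']['path'] = 'http://someurl.remote/blob/db.reevault'
cfg['global']['databases']['second_db']['read_only'] = true

puts YAML.dump(cfg) # the YAML module handles indentation for you
```

Assigning the whole sub-hash in one step also works: cfg['global']['databases']['second_db'] = { 'name' => 'Secondary Database' }.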

ruby gem to auto reload config files during runtime

I am looking for a Ruby gem (or an idea to develop one) which can refresh config files (YAML) during runtime, so that I can store them in a variable and use them.
There's a config object in Configurability (disclosure: I'm the author) which you can use either on its own, or as part of the Configurability mixin. From the documentation:
Configurability also includes Configurability::Config, a fairly simple
configuration object class that can be used to load a YAML configuration file,
and then present both a Hash-like and a Struct-like interface for reading
configuration sections and values; it's meant to be used in tandem with Configurability, but it's also useful on its own.
Here's a quick example to demonstrate some of its features. Suppose you have a
config file that looks like this:
---
database:
  development:
    adapter: sqlite3
    database: db/dev.db
    pool: 5
    timeout: 5000
  testing:
    adapter: sqlite3
    database: db/testing.db
    pool: 2
    timeout: 5000
  production:
    adapter: postgres
    database: fixedassets
    pool: 25
    timeout: 50
ldap:
  uri: ldap://ldap.acme.com/dc=acme,dc=com
  bind_dn: cn=web,dc=acme,dc=com
  bind_pass: Mut@ge.Mix@ge
branding:
  header: "#333"
  title: "#dedede"
  anchor: "#9fc8d4"
You can load this config like so:
require 'configurability/config'
config = Configurability::Config.load( 'examples/config.yml' )
# => #<Configurability::Config:0x1018a7c7016 loaded from
examples/config.yml; 3 sections: database, ldap, branding>
And then access it using struct-like methods:
config.database
# => #<Configurability::Config::Struct:101806fb816
{:development=>{:adapter=>"sqlite3", :database=>"db/dev.db", :pool=>5,
:timeout=>5000}, :testing=>{:adapter=>"sqlite3",
:database=>"db/testing.db", :pool=>2, :timeout=>5000},
:production=>{:adapter=>"postgres", :database=>"fixedassets",
:pool=>25, :timeout=>50}}>
config.database.development.adapter
# => "sqlite3"
config.ldap.uri
# => "ldap://ldap.acme.com/dc=acme,dc=com"
config.branding.title
# => "#dedede"
or using a Hash-like interface using either Symbols, Strings, or a mix of
both:
config[:branding][:title]
# => "#dedede"
config['branding']['header']
# => "#333"
config['branding'][:anchor]
# => "#9fc8d4"
You can install it via the Configurability interface:
config.install
Check to see if the file it was loaded from has changed since you
loaded it:
config.changed?
# => false
# Simulate changing the file by manually changing its mtime
File.utime( Time.now, Time.now, config.path )
config.changed?
# => true
If it has changed (or even if it hasn't), you can reload it, which automatically re-installs it via the Configurability interface:
config.reload
You can make modifications via the same Struct- or Hash-like interfaces and write the modified config back out to the same file:
config.database.testing.adapter = 'mysql'
config[:database]['testing'].database = 't_fixedassets'
then dump it to a YAML string:
config.dump
# => "--- \ndatabase: \n development: \n adapter: sqlite3\n
database: db/dev.db\n pool: 5\n timeout: 5000\n testing: \n
adapter: mysql\n database: t_fixedassets\n pool: 2\n timeout:
5000\n production: \n adapter: postgres\n database:
fixedassets\n pool: 25\n timeout: 50\nldap: \n uri:
ldap://ldap.acme.com/dc=acme,dc=com\n bind_dn:
cn=web,dc=acme,dc=com\n bind_pass: Mut#ge.Mix#ge\nbranding: \n
header: \"#333\"\n title: \"#dedede\"\n anchor: \"#9fc8d4\"\n"
or write it back to the file it was loaded from:
config.write
Using for example Watchr or Guard you can monitor files and act on changes to them.
The actual action to take when a file changes depends entirely on your specific setup and situation, so you're on your own there. Or you need to provide more information.
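Without any gem, a minimal reload-on-change mechanism can also be built on File.mtime, similar to what Configurability's changed?/reload pair does. A sketch (ConfigReloader is a hypothetical class, not a published gem):

```ruby
require 'yaml'

# Re-reads a YAML file whenever its modification time changes,
# otherwise returns the cached parse.
class ConfigReloader
  def initialize(path)
    @path = path
    @mtime = nil
    @data = nil
  end

  def config
    mtime = File.mtime(@path)
    if @mtime != mtime
      @data = YAML.safe_load(File.read(@path))
      @mtime = mtime
    end
    @data
  end
end
```

Calling config on every access keeps the variable fresh without a background thread; a Watchr/Guard watcher could instead push reloads when the file changes.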
