How to update config based on environment for middleman s3_sync? - ruby

I'm trying to push slate docs to 2 different S3 buckets based on the environment.
But it complains that s3_sync is not defined for middleman.
I have set the S3 bucket for the internal environment in config.rb, but I still get the error below when I run bundle exec middleman s3_sync --verbose --environment=internal
config.rb:
configure :internal do
  s3_sync.bucket = ENV['INTERNAL_DOCS_AWS_BUCKET'] # The name of the internal docs S3 bucket you are targeting. This is globally unique.
end
activate :s3_sync do |s3_sync|
  s3_sync.bucket = ENV['DOCS_AWS_BUCKET'] # The name of the S3 bucket you are targeting. This is globally unique.
  s3_sync.region = ENV['DOCS_AWS_REGION'] # The AWS region for your bucket.
  s3_sync.aws_access_key_id = ENV['DOCS_AWS_ACCESS_KEY_ID']
  s3_sync.aws_secret_access_key = ENV['DOCS_AWS_SECRET_ACCESS_KEY']
  s3_sync.prefer_gzip = true
  s3_sync.path_style = true
  s3_sync.reduced_redundancy_storage = false
  s3_sync.acl = 'public-read'
  s3_sync.encryption = false
  s3_sync.prefix = ''
  s3_sync.version_bucket = false
  s3_sync.index_document = 'index.html'
  s3_sync.error_document = '404.html'
end
Error:
bundler: failed to load command: middleman (/usr/local/bundle/bin/middleman)
NameError: undefined local variable or method `s3_sync' for #<Middleman::ConfigContext:0x0000561eca099a40>

s3_sync is only defined within the block passed to activate :s3_sync; it is undefined inside the configure :internal block.
A solution might look like the following, using environment? (or environment) to pick the bucket inside the activate block:
activate :s3_sync do |s3_sync|
  s3_sync.bucket = if environment?(:internal)
                     ENV['INTERNAL_DOCS_AWS_BUCKET']
                   else
                     ENV['DOCS_AWS_BUCKET']
                   end
  s3_sync.region = ENV['DOCS_AWS_REGION']
  # ...
end
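Alternatively, you can resolve the bucket once before activating the extension. A minimal sketch of the same idea, assuming environment? is also available at the top level of config.rb and the same environment variables as above:
# Pick the bucket up front; environment?(:internal) is true when
# middleman is run with --environment=internal.
docs_bucket = environment?(:internal) ? ENV['INTERNAL_DOCS_AWS_BUCKET'] : ENV['DOCS_AWS_BUCKET']

activate :s3_sync do |s3_sync|
  s3_sync.bucket = docs_bucket
  s3_sync.region = ENV['DOCS_AWS_REGION']
  # ... remaining options as before ...
end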

Related

Generate Filename Before Downloading

I'm trying to download the latest backup of data during my Chef run, but it tries to download the file before the filename is generated. What's the best approach for doing this? All I want to do is generate a filename based on the time and download it.
The code below gives the error undefined method 'latest_backup' for Custom resource aws_s3_file from cookbook aws.
ruby_block "generate file name" do
  block do
    require 'time'
    latest_backup = "NOT-SET"
    utc_now = Time.now.utc
    utc_midday = Time.new(Time.new.year, Time.new.month, Time.new.day, 22, 00, 1).utc
    utc_midnight = Time.new(Time.new.year, Time.new.month, Time.new.day, 10, 00, 1).utc
    if (utc_now < utc_midday) && (utc_now > utc_midnight)
      latest_backup = "data_" + Time.now.strftime("%Y%m%d") + "-00001.tgz"
    elsif (utc_now > utc_midday) && (utc_now < utc_midnight)
      latest_backup = "data_" + Time.now.strftime("%Y%m%d") + "-120001.tgz"
    end
  end
  action :create
end
aws_s3_file "/root/backup.tgz" do
  remote_path "backup-dir/#{latest_backup}"
  bucket "my-backups-bucket"
  region "ap-southeast-2"
end
You can't set a local variable across contexts like that. Since nothing in that code requires waiting until converge time, you can just run the code outside of a ruby_block and have it be a normal local variable.
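A minimal sketch of that approach, reusing the question's resource and cutoff hours (the hour values are the question's own, and the unreachable NOT-SET branch is collapsed into a plain else):
# Runs at recipe compile time, outside any ruby_block, so the local
# variable is still in scope for the aws_s3_file resource below.
require 'time'

utc_now = Time.now.utc
utc_midday = Time.new(Time.new.year, Time.new.month, Time.new.day, 22, 0, 1).utc
utc_midnight = Time.new(Time.new.year, Time.new.month, Time.new.day, 10, 0, 1).utc

latest_backup =
  if utc_now < utc_midday && utc_now > utc_midnight
    "data_#{utc_now.strftime('%Y%m%d')}-00001.tgz"
  else
    "data_#{utc_now.strftime('%Y%m%d')}-120001.tgz"
  end

aws_s3_file "/root/backup.tgz" do
  remote_path "backup-dir/#{latest_backup}"
  bucket "my-backups-bucket"
  region "ap-southeast-2"
end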

Unable to pass environment variables to Create_Function AWS SDK method in Ruby

I'm trying to execute the following Ruby code and it constantly fails with an "unexpected value at params[:environment]" error. I tried many different options for passing a Hash to the 'environment' parameter, but it triggers the same error.
require 'aws-sdk'
client = Aws::Lambda::Client.new(region: 'us-east-1')
args = {}
args[:role] = "some_role"
args[:function_name] = "function"
args[:handler] = "function_handler"
args[:runtime] = "java8"
code = {}
code[:zip_file] = ::File.open("file.jar", "rb").read
args[:code] = code
environment = {}
environment[:variables] = { "AAA": "BBB" }
args[:environment] = environment
client.create_function(args)
Fixed by upgrading the Ruby AWS SDK from 2.6 to 2.9.
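If the SDK version comes from a Gemfile, the fix amounts to bumping the constraint, e.g.:
# Gemfile: require an aws-sdk release new enough to accept the
# :environment parameter on create_function.
gem 'aws-sdk', '~> 2.9'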

When provisioning with Terraform, how does code obtain a reference to machine IDs (e.g. database machine address)

Let's say I'm using Terraform to provision two machines inside AWS:
An EC2 Machine running NodeJS
An RDS instance
How does the NodeJS code obtain the address of the RDS instance?
You've got a couple of options here. The simplest one is to create a CNAME record in Route53 for the database and then always point to that CNAME in your application.
A basic example would look something like this:
resource "aws_db_instance" "mydb" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "mydb"
username = "foo"
password = "bar"
db_subnet_group_name = "my_database_subnet_group"
parameter_group_name = "default.mysql5.6"
}
resource "aws_route53_record" "database" {
zone_id = "${aws_route53_zone.primary.zone_id}"
name = "database.example.com"
type = "CNAME"
ttl = "300"
records = ["${aws_db_instance.default.endpoint}"]
}
Alternative options include taking the endpoint output from the aws_db_instance and passing it into a user data script when creating the EC2 instance, or passing it to Consul and using Consul Template to control the config that your application uses.
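For the user data route, a minimal sketch referencing the mydb instance above (the aws_instance, its AMI, and the target env file are illustrative assumptions):
# Hypothetical app server that receives the database address via user data.
resource "aws_instance" "app" {
  ami           = "ami-123456" # assumed AMI id
  instance_type = "t2.micro"

  user_data = <<EOF
#!/bin/bash
echo "DB_HOST=${aws_db_instance.mydb.address}" >> /etc/environment
EOF
}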
You may try Sparrowform, a lightweight provisioning tool for Terraform-based instances. It can take an inventory of Terraform resources and provision the related hosts, passing in all the necessary data:
$ terraform apply # bootstrap infrastructure
$ cat sparrowfile # this scenario fetches the DB address from the
                  # terraform cache and populates a configuration
                  # file on the server with the node js code:
#!/usr/bin/env perl6
use Sparrowform;

my $rdb-address;
for tf-resources() -> $r {
  my $r-id = $r[0]; # resource id
  if ( $r-id eq 'aws_db_instance.mydb' ) {
    my $r-data = $r[1];
    $rdb-address = $r-data<address>;
    last;
  }
}

# For instance, we can install a configuration file.
# The next chunk of code will be applied to
# the server with the node-js code:
template-create '/path/to/config/app.conf', %(
  source    => ( slurp 'app.conf.tmpl' ),
  variables => %(
    rdb-address => $rdb-address
  ),
);

$ sparrowform --ssh_private_key=~/.ssh/aws.pem --ssh_user=ec2 # run the provision tool
PS. Disclosure: I am the tool author.

Puppet: How to require an additional resource in a custom type

I am writing a custom type for Puppet and use the following code to copy a module file specified by a puppet url to the user's home directory:
def generate
  if self[:source]
    uri = URI.parse(self[:source])
    path = File.join(Etc.getpwnam(self[:user])[:dir], File.basename(uri.path))
    file_opts = {}
    file_opts[:name] = path
    file_opts[:ensure] = self[:ensure] == :absent ? :absent : :file
    file_opts[:source] = self[:source]
    file_opts[:owner] = self[:user]
    self[:source] = path
    Puppet::Type.type(:file).new(file_opts)
  end
end
Things are working fine so far. The resource is added to the catalog and created on the agent side. But I have a problem...
How can I specify that this additional file resource must be created before the actual type is executed? Unfortunately, I cannot find an example that shows how to specify a dependency on an optional resource defined in a generate method.

Bucketeer - Heroku's add-on s3 bucket configuration on Django

I am currently using S3 to serve static files on Heroku. I created and manage the S3 bucket myself, and my settings.py is the following.
import os
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = '<MY BUCKET NAME>'
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
STATIC_URL = 'http://' + AWS_STORAGE_BUCKET_NAME + '.s3.amazonaws.com/'
ADMIN_MEDIA_PREFIX = STATIC_URL + 'admin/'
This is the same as this answer, and it works perfectly fine: Django + Heroku + S3
However, I want to switch to Bucketeer, a Heroku add-on that creates and manages an S3 bucket for you. Bucketeer provides different parameters, the static URL looks different, and I can't make it work. The URL has the following pattern: "bucketeer-heroku-shared.s3.amazonaws.com/UNIQUE_BUCKETEER_BUCKET_PREFIX/public/". So my updated code is the following.
#Bucketeer
AWS_ACCESS_KEY_ID = os.environ.get('BUCKETEER_AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('BUCKETEER_AWS_SECRET_ACCESS_KEY')
BUCKETEER_BUCKET_PREFIX = os.environ.get('BUCKETEER_BUCKET_PREFIX')
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
#Bucketeer Config
STATIC_URL = ('http://bucketeer-heroku-shared.s3.amazonaws.com/' +
              BUCKETEER_BUCKET_PREFIX + '/public/')
#I also tried
#STATIC_URL = ('http://bucketeer-heroku-shared.s3.amazonaws.com/' +
#              BUCKETEER_BUCKET_PREFIX + '/')
And this is the error I got.
Preparing static assets
Collectstatic configuration error. To debug, run:
$ heroku run python manage.py collectstatic --noinput
Needless to say no static files were present on the app, so when I ran the suggested command I got:
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
This means I'm not authorized to access said bucket. Could somebody shed some light on what is going on here and how to fix it?
