How to create publicly readable object store containers with ruby openstack gem? - ruby

I have tried to create publicly readable openstack object store containers like this:
os = OpenStack::Connection.create(...)
container = os.create_container(container_name)
container.set_metadata({'X-Container-Read' => '.r:*'})
Using my code above, the newly created containers are private.
What is the correct way to create containers with public read permissions with the ruby openstack gem?

You can try the following way: redefine the create_container method, like this:
class MyStack < OpenStack::Swift::Connection
  def create_container(containername)
    super
    path = "/#{URI.encode(containername.to_s)}"
    @connection.req("PUT", path, {:headers => {"Content-Length" => "0", "X-Container-Read" => ".r:*", "X-Container-Write" => ".r:*"}})
    OpenStack::Swift::Container.new(self, containername)
  end
end
These "X-Container-Read" => ".r:*", "X-Container-Write" => ".r:*" header value you need to set.
or
container.set_metadata({"X-Container-Read" => ".r:*", "X-Container-Write" => ".r:*"})

Here's what I ended up doing:
module PubliclyReadableContainerMonkeyPatch
  def create_publicly_readable_container(containername)
    raise OpenStack::Exception::InvalidArgument.new("Container name cannot contain '/'") if containername.match("/")
    raise OpenStack::Exception::InvalidArgument.new("Container name is limited to 256 characters") if containername.length > 256
    path = "/#{URI.encode(containername.to_s)}"
    @connection.req("PUT", path, {:headers => {"Content-Length" => "0", "X-Container-Read" => ".r:*"}})
    OpenStack::Swift::Container.new(self, containername)
  end
end

OpenStack::Swift::Connection.include PubliclyReadableContainerMonkeyPatch

os = OpenStack::Connection.create(...)
container = os.create_publicly_readable_container(container_name)
Works for me. :)
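To double-check that the ACL actually landed on the container, a plain HEAD request against the Swift endpoint will show the read header. This is just a sanity-check sketch; the storage URL and auth token below are placeholders for your own values:
require 'net/http'
require 'uri'

# Placeholder Swift storage URL and token; use your account's values.
uri  = URI("https://swift.example.com/v1/AUTH_account/#{container_name}")
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true

response = http.head(uri.path, 'X-Auth-Token' => 'YOUR_TOKEN')
puts response['X-Container-Read']  # => ".r:*" once the container is public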

Related

How to update config based on environment for middleman s3_sync?

I'm trying to push slate docs to 2 different S3 buckets based on the environment.
But it's complaining that s3_sync is not a parameter for middleman.
I have set the S3 bucket for the environment in config.rb, but I still get the error below when I run bundle exec middleman s3_sync --verbose --environment=internal.
config.rb:
configure :internal do
  s3_sync.bucket = ENV['INTERNAL_DOCS_AWS_BUCKET'] # The name of the internal docs S3 bucket you are targeting. This is globally unique.
end

activate :s3_sync do |s3_sync|
  s3_sync.bucket = ENV['DOCS_AWS_BUCKET'] # The name of the S3 bucket you are targeting. This is globally unique.
  s3_sync.region = ENV['DOCS_AWS_REGION'] # The AWS region for your bucket.
  s3_sync.aws_access_key_id = ENV['DOCS_AWS_ACCESS_KEY_ID']
  s3_sync.aws_secret_access_key = ENV['DOCS_AWS_SECRET_ACCESS_KEY']
  s3_sync.prefer_gzip = true
  s3_sync.path_style = true
  s3_sync.reduced_redundancy_storage = false
  s3_sync.acl = 'public-read'
  s3_sync.encryption = false
  s3_sync.prefix = ''
  s3_sync.version_bucket = false
  s3_sync.index_document = 'index.html'
  s3_sync.error_document = '404.html'
end
Error:
bundler: failed to load command: middleman
(/usr/local/bundle/bin/middleman) NameError: undefined local variable
or method `s3_sync' for #<Middleman::ConfigContext:0x0000561eca099a40>
s3_sync is only defined within the block of activate :s3_sync.
It is undefined within the configure :internal block.
A solution might look like the following, using environment? (or environment):
activate :s3_sync do |s3_sync|
  s3_sync.bucket = if environment?(:internal)
                     ENV['INTERNAL_DOCS_AWS_BUCKET']
                   else
                     ENV['DOCS_AWS_BUCKET']
                   end
  s3_sync.region = ENV['DOCS_AWS_REGION']
  # ...
end
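As a variation on the same idea, using nothing beyond plain Ruby and the environment? helper shown above, the per-environment values can be kept in one place so the activate block stays tidy (a sketch, not the only way to arrange it):
# Sketch: choose the bucket per environment up front, then reference it below.
s3_buckets = {
  internal: ENV['INTERNAL_DOCS_AWS_BUCKET'],
  default:  ENV['DOCS_AWS_BUCKET']
}

activate :s3_sync do |s3_sync|
  s3_sync.bucket = environment?(:internal) ? s3_buckets[:internal] : s3_buckets[:default]
  s3_sync.region = ENV['DOCS_AWS_REGION']
  # ... remaining settings unchanged from the original config
end
Deploying to the internal bucket is then just a matter of passing --environment=internal on the command line, as in the question.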

Is there a ruby method for finding a blob uri?

I checked the whole azure-storage-blob gem and didn't find any way to get the URI for a blob. Is there some way to construct it correctly and in a generic way that will work for any other blob in any region?
I used S3 SDK before and I'm well grounded in S3 but new to Azure.
There is a protected method called blob_uri that looks like this:
def blob_uri(container_name, blob_name, query = {}, options = {})
  if container_name.nil? || container_name.empty?
    path = blob_name
  else
    path = ::File.join(container_name, blob_name)
  end
  options = { encode: true }.merge(options)
  generate_uri(path, query, options)
end
So you could take the shortcut of:
blob_client = Azure::Storage::Blob::BlobService.create(storage_account_name: 'XXX', storage_access_key: 'XXX')
blob_client.send(:blob_uri, container_name, blob_name)
However, the actual URI is simply:
https://[storage_account_name].blob.core.windows.net/[container]/[blob name]
Since you already have to know the container and blob name to access the blob,
File.join(blob_client.host, container, blob_name)
is the URI to the blob.
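Putting that together, a minimal sketch (the account, container, and blob names below are hypothetical placeholders):
require 'azure/storage/blob'

blob_client = Azure::Storage::Blob::BlobService.create(
  storage_account_name: 'mystorageaccount',  # hypothetical account
  storage_access_key: 'XXX'
)

container_name = 'images'      # hypothetical container
blob_name      = 'avatar.png'  # hypothetical blob

uri = File.join(blob_client.host, container_name, blob_name)
# => "https://mystorageaccount.blob.core.windows.net/images/avatar.png"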

puppet functions exec windows command line

Currently I am trying to automate the start mode of Windows server services. I tried to use puppetlabs-registry but realized that the module didn't work as I expected.
Basically I have a list of Windows services that I need to update on each server, but on some servers a service might not exist. puppetlabs-registry will simply create the key if it does not exist, which is not the behaviour I want. Instead, it should work as described below:
Check whether the service exists on the server
If it does, update the start mode as specified in the manifest/Hiera
If it does not exist, do nothing and skip to the next service
From what I know, the only way to check whether the service key exists is with a custom function. I already tried to write a custom function using win32/registry, but was unsuccessful, getting errors such as Win32API not supported. Another way I can think of is using the reg command line tool to check whether the key exists. Here is the Puppet custom function:
module Puppet::Parser::Functions
  newfunction(:check_winservice_exist, :type => :rvalue) do |args|
    unless args.length > 0
      raise Puppet::ParseError, "check_winservice_exist(): wrong number of arguments (#{args.length}; must be > 0)"
    end
    service_name = args[0]

    command = "reg query HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\#{service_name} /f DisplayName"
    result = system command
    return result
  end
end
When I run the equivalent Ruby script on the command line, it works and returns the expected value. But when I use the code above as a Puppet custom function, it always returns empty.
This is my first time writing a Puppet custom function, so I am not sure what I did wrong here. Please advise whether there is another approach I can use to resolve the issue, or what I did wrong in the function script.
I managed to resolve this issue by using a custom fact, as suggested by Matt. (Custom parser functions run on the Puppet master during catalog compilation, while facts are evaluated on the agent node itself, which is why the registry lookup belongs in a fact.) Just sharing the custom fact script that I used. It might not be perfect, as I am still not really proficient in Ruby.
require 'win32/registry'

Facter.add(:winservices) do
  confine :kernel => "windows"
  setcode do
    keyname = 'SYSTEM\CurrentControlSet\Services'
    access  = Win32::Registry::KEY_ALL_ACCESS
    arr = []
    winservices_list = []
    Win32::Registry::HKEY_LOCAL_MACHINE.open(keyname, access) do |reg|
      reg.each_key { |key, _wtime| arr.push key }
      arr.each do |service|
        service_key = "SYSTEM\\CurrentControlSet\\Services\\#{service}"
        begin
          Win32::Registry::HKEY_LOCAL_MACHINE.open(service_key, access) do |service_reg|
            # Reading 'Start' raises if the value is missing, so only entries
            # that are real services end up in the list.
            service_reg['Start']
            winservices_list.push service
          end
        rescue
          # Not a service entry with a Start value; skip it.
        end
      end
    end
    winservices_list
  end
end
It then works simply by checking whether the service name is in the array or not:
if $service_name in $facts['winservices'] {
  service { "${service_name}":
    provider => 'windows',
    enable   => $start_real,
  }
}
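If you want to sanity-check the fact on the Windows node before relying on it in a manifest, a short Ruby session works too. This is just a sketch and assumes the fact file above has been saved locally as winservices.rb:
require 'facter'
require_relative 'winservices'   # loads the Facter.add(:winservices) block above

p Facter.value(:winservices).first(10)  # print the first few detected services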

How to upload an image to Amazon S3 into a folder in ruby?

I am trying to do it like this:
AWS.config(
:access_key_id => '...',
:secret_access_key => '...'
)
s3 = AWS::S3.new
bucket_name = 'bucket_name'
key = "#{File.basename(avatar_big)}"
s3.buckets[bucket_name].objects[key].write(:file => avatar_big_path)
This works well for a file; the file is uploaded to the root of the configured bucket.
However, how can I upload it into the folder photos that is located in the root?
I've tried
key = "photos/#{File.basename(avatar_big)}"
but this doesn't work.
EDIT: error message
Thank you
I had the same issue as the OP. This is what worked for me:
key = "photos/example.jpg"
bucket = s3.buckets[bucket_name]
filepath = Pathname.new("path/to/example.jpg")
o = bucket.objects[key]
o.write(filepath)
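To confirm the object actually landed under the photos/ prefix, listing by prefix is a quick check (same v1 aws-sdk interface as above; the key shown is the hypothetical one from this example):
bucket.objects.with_prefix('photos/').each do |obj|
  puts obj.key   # => "photos/example.jpg"
end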
Something I would check out is the object key you are trying to use. There's not much documentation on what the restrictions are (see this and this), but the one shown in the error message looks suspicious to me.
Try including the path in the file key:
s3.buckets[bucket_name].objects[key].write(:file => "photos/#{avatar_big_path}")

"resources"-directory for ruby gem

I'm currently experimenting with creating my own gem in Ruby. The gem requires some static resources (say, an icon in ICO format). Where do I put such resources within my gem directory tree, and how do I access them from code?
Also, parts of my extension are native C code and I would like the C-parts to have access to the resources too.
You can put resources anywhere you want, except in the lib directory. Since it will be part of Ruby's load path, the only files that should be there are the ones that you want people to require.
For example, I usually store translated text in the i18n/ directory. For icons, I'd just put them in resources/icons/.
As for how to access these resources... I ran into this problem enough that I wrote a little gem just to avoid repetition.
Basically, I was doing this all the time:
def Your::Gem.root
  # Current file is /home/you/code/your/lib/your/gem.rb
  File.expand_path '../..', File.dirname(__FILE__)
end

Your::Gem.root
# => /home/you/code/your/
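With just that helper, fetching a resource path is a matter of joining onto the root (a sketch using the resources/icons layout suggested above):
icon_path = File.join(Your::Gem.root, 'resources', 'icons', 'your.ico')
# => "/home/you/code/your/resources/icons/your.ico"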
I wrapped this up into a nice DSL, added some additional convenience stuff and ended up with this:
class Your::Gem < Jewel::Gem
  root '../..'
end

root = Your::Gem.root
# => /home/you/code/your/

# No more joins!
path = root.resources.icons 'your.ico'
# => /home/you/code/your/resources/icons/your.ico
As for accessing your resources in C, path is just a Pathname. You can pass it to a C function as a string, open the file and just do what you need to do. You can even return an object to the Ruby world:
VALUE your_ico_new(VALUE klass, VALUE path) {
    char *ico_file = NULL;
    struct your_ico *ico = NULL;

    ico_file = StringValueCStr(path);
    ico = your_ico_load_from_file(ico_file); /* Implement this */

    return Data_Wrap_Struct(your_ico_class, your_ico_mark, your_ico_free, ico);
}
Now you can access it from Ruby:
ico = Your::Ico.new path
